§Scalability
-
concepts
- Vertical scaling: CPU (more cores, L2 cache…), disk (PATA, SATA, SAS…; RAID), RAM…
- Horizontal scaling: get a bunch of slower, or at least cheaper, machines instead of a few expensive ones
- Caching (see the cache-aside sketch after this list)
- Load balancing: traffic comes from many users and is distributed or balanced across the servers
- Database replication
  - Master-slave: guards against the database dying; if the workload is more read-heavy than write-heavy, SELECT statements go to servers 2, 3, 4, while INSERTs, UPDATEs, and DELETEs go to server 1
  - Master-master: you can write to any master; if one master is down, you can still read from and write to the other
- Database partitioning
- Using NoSQL instead of scaling a relational database
- Being asynchronous
- How to deal with the two major bottlenecks: handling a lot of users and handling a lot of data
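A minimal cache-aside sketch in Python, illustrating the caching bullet above. The dict-based cache, the FakeDB class, and the TTL are illustrative stand-ins for something like memcached or Redis in front of a real database, not part of the original notes.

```python
# Minimal cache-aside sketch: try the cache first, fall back to the database on
# a miss, and invalidate on writes. FakeDB and the TTL are illustrative stand-ins.
import time


class FakeDB:
    """Stands in for a real database (e.g. MySQL)."""
    def __init__(self):
        self._rows = {}

    def get(self, key):
        return self._rows.get(key)

    def put(self, key, value):
        self._rows[key] = value


class CacheAside:
    def __init__(self, db, ttl_seconds=60):
        self.db = db
        self.ttl = ttl_seconds
        self._cache = {}          # key -> (value, expires_at)

    def get(self, key):
        entry = self._cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                                   # cache hit
        value = self.db.get(key)                              # cache miss: hit the DB
        self._cache[key] = (value, time.time() + self.ttl)
        return value

    def put(self, key, value):
        self.db.put(key, value)                               # write to the DB
        self._cache.pop(key, None)                            # invalidate stale entry


cache = CacheAside(FakeDB(), ttl_seconds=30)
cache.put("user:1", "Alice")
print(cache.get("user:1"))        # first read fills the cache; repeats skip the DB
```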
-
e.g.
- Uber: A nice article about how Uber had to scale fast, and about breaking your service into many microservices spread across many repos
- Facebook: How Facebook handles 800,000 simultaneous viewers on a live stream
- Twitter: How Twitter handles 3,000 image uploads per second and why the old ways it used would not work nowadays. Twitter subcomponents: Storing data (video | text), and Timeline (video | text).
- Salesforce: A relatively short example from Salesforce.
- Google, Youtube (video | text), Tumblr, StackOverflow, and Datashift.
-
Wrap-up
You first build a high-level architecture by identifying the constraints and use cases, sketching up the major components and the relationships between them, and thinking about the system’s bottlenecks. You then apply the right scalability patterns that will take care of these bottlenecks in the context of the system’s constraints.
-
e.g. Scalable design
-
Application Service layer
- Start with one machine
- Measure how far it gets us (load tests)
- Add a load balancer + a cluster of machines over time: to deal with spiky traffic, to increase availability
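A toy round-robin balancer sketch for a small cluster of app servers; the server names and request strings are made up, and a real deployment would use a dedicated load balancer rather than application code.

```python
# Toy round-robin load balancer: requests are handed to each app server in turn.
# The server names are made up; a real deployment would use a dedicated balancer.
import itertools


class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = itertools.cycle(servers)   # endless rotation over the cluster

    def route(self, request):
        return next(self._servers), request        # next server in turn gets the request


balancer = RoundRobinBalancer(["app1", "app2", "app3"])
for i in range(6):
    print(balancer.route(f"GET /page/{i}"))        # app1, app2, app3, app1, ...
```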
-
Data storage
- Billions of objects
- Each object is fairly small (<1K)
- There are no relationships between the objects
- Reads are 9x more frequent than writes (360 reads, 40 writes per second)
- 3TB of URLs, 36GB of hashes
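A quick back-of-the-envelope check of these numbers, assuming each hash is 6 single-byte characters (matching the varchar(6) mapping below); this arithmetic is a sanity check added here, not part of the original notes.

```python
# Back-of-the-envelope check of the stated numbers.
# Assumption: each hash is 6 single-byte characters (matching varchar(6) below).
hash_bytes = 6
hash_storage = 36e9            # 36 GB of hashes
url_storage = 3e12             # 3 TB of URLs

objects = hash_storage / hash_bytes
print(f"objects:        {objects:.0e}")                 # ~6e9, i.e. billions
print(f"bytes per URL:  {url_storage / objects:.0f}")   # ~500 bytes, i.e. < 1K

reads, writes = 360, 40        # per second
print(f"read/write:     {reads / writes:.0f}x")         # 9x more reads than writes
print(f"new URLs/day:   {writes * 86400:,.0f}")         # ~3.5 million
```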
MySQL:
- Widely used
- Mature technology
- Clear scaling paradigms (sharding, master/slave replication, master/master replication)
- Used by Facebook, Twitter, Google, etc.
- Index lookups are very fast
Mappings:
- hash: varchar(6)
- original_url: varchar(512)
-
Create a unique index on the hash (36GB+). We want to hold it in memory.
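A sketch of that table and its unique index, using Python's built-in sqlite3 in memory as a stand-in for MySQL so it runs without a server; the table name url_mapping is made up, and the columns follow the mapping above.

```python
# The mapping table and its unique hash index, sketched with sqlite3 in memory as
# a stand-in for MySQL. The table name url_mapping is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE url_mapping (
        hash         VARCHAR(6)   NOT NULL,
        original_url VARCHAR(512) NOT NULL
    )
""")
# The unique index is what makes lookups by hash fast; ideally it fits in memory.
conn.execute("CREATE UNIQUE INDEX idx_hash ON url_mapping (hash)")

conn.execute("INSERT INTO url_mapping VALUES (?, ?)",
             ("abc123", "https://example.com/some/very/long/path"))
row = conn.execute("SELECT original_url FROM url_mapping WHERE hash = ?",
                   ("abc123",)).fetchone()
print(row[0])        # the shortener expands a 6-char hash back to the original URL
```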
-
Option 1: Vertical scaling of the MySQL machine
-
Option 2: Partition the data: 5 partitions, 600GB of data and ~8GB of indexes each
-
Master-slave (reads from the slaves, writes to the master)
=>
- Use one MySQL table with two varchar fields
- Create a unique index on the hash (36GB+). We want to hold it in memory to speed up lookups.
- Vertical scaling of the MySQL machine for a while
- Eventually, partition the data by taking the first character of the hash mod the number of partitions. Think about a master-slave setup (see the routing sketch below).
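A sketch of the routing implied by the last two bullets: the first character of the hash mod the number of partitions picks the shard, reads go to a slave, and writes go to the master. The connection strings here are placeholders, not real MySQL clients.

```python
# Routing sketch: the first character of the hash mod the number of partitions
# picks the shard; reads go to a slave, writes to the master. The connection
# strings are placeholders, not real MySQL connections.
import random

NUM_PARTITIONS = 5

PARTITIONS = {
    p: {"master": f"db{p}-master", "slaves": [f"db{p}-slave{i}" for i in range(2)]}
    for p in range(NUM_PARTITIONS)
}


def partition_for(url_hash):
    return ord(url_hash[0]) % NUM_PARTITIONS


def connection_for(url_hash, write=False):
    shard = PARTITIONS[partition_for(url_hash)]
    if write:
        return shard["master"]              # inserts/updates/deletes go to the master
    return random.choice(shard["slaves"])   # selects are spread across the slaves


print(connection_for("abc123"))             # read  -> a slave in partition ord('a') % 5
print(connection_for("abc123", write=True)) # write -> that partition's master
```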
-
Example: The Twitter Problem, The Summarization Problem