Chapter#6

Consistency Model (Simplified)

Example:

  • Imagine row X is stored on two servers, M and N.

  • A user (Client A) updates row X on server M.

  • After some time, another user (Client B) reads row X from server N.

  • The consistency model determines if Client B will:

    • Definitely see the update,

    • Definitely not see it, or

    • Might or might not see it (uncertain).
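The scenario above can be sketched in a few lines of Python. This is an illustrative simulation, not real replication code; the names `Replica`, `server_m`, and `server_n` are made up for the example.

```python
class Replica:
    """A server holding a copy of row X (illustrative sketch)."""
    def __init__(self):
        self.row_x = "old"

server_m = Replica()   # hypothetical server M
server_n = Replica()   # hypothetical server N

# Client A updates row X on server M.
server_m.row_x = "new"

# Replication to N has not happened yet, so Client B's read from
# server N still returns the stale value.
stale_read = server_n.row_x

# After asynchronous replication copies the update over,
# a read from N finally sees it.
server_n.row_x = server_m.row_x
fresh_read = server_n.row_x
```

Which of the two values Client B observes is exactly what a consistency model pins down.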


Conflicting Operations

  • Read-Write Conflict: A read and a write to the same data item overlap without any synchronization between them.

  • Write-Write Conflict: Two writes to the same data item overlap without synchronization, so different replicas may apply them in different orders.
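A write-write conflict can be shown with two replicas that each accept a different unsynchronized write. This is a minimal sketch; the replica dictionaries are hypothetical stand-ins for servers.

```python
# Two clients write to different replicas at "the same time".
# Without coordination, the copies diverge: a write-write conflict.
replica_1 = {"x": 0}
replica_2 = {"x": 0}

replica_1["x"] = 10   # write by client A, applied at replica 1
replica_2["x"] = 20   # write by client B, applied at replica 2

# The two copies of x now disagree.
conflict = replica_1["x"] != replica_2["x"]
```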


Why Replicate Data?

  • Main reasons:

    • Reliability (backup copies)

    • Performance (faster access)

  • The downside:

    • Having many copies can lead to inconsistency if updates aren’t made to all of them.

  • To maintain consistency, all copies must be updated correctly and on time.


What is a Consistency Model?

A memory (consistency) model is like a rulebook: it tells the computer how to manage memory when many things, such as different parts of your program, are happening at once. It acts as a contract between your program and the computer: if your code follows the rules, the memory system promises to behave predictably. This keeps everything organized when programs share memory and run at the same time.


  • It’s a set of rules that describe how memory should behave for programmers.

  • Helps match the programmer’s expectations with what the system actually does.

  • Acts like a contract:

    • If software follows certain rules, the memory system guarantees proper behavior.

  • It defines when and how memory changes become visible to processors in shared memory systems.


Consistency vs. Coherence

  • Coherence: Ensures all CPUs see writes to the same variable (a single memory location) in the same order.

  • Consistency: Ensures all CPUs see writes across all variables (different memory locations) in an agreed order.

Coherence (Think: One Variable at a Time)

Coherence means:

All CPUs must see changes to a single variable in the same order.

Example:
Imagine there's one whiteboard (a variable), and many people (CPUs) are watching it.
If someone writes on it:

  • Everyone should see the same order of changes — like “A, then B, then C” — not “B, then A” or “C, then B”.

So, coherence is about making sure everyone agrees on what happened and when — for one variable.


Consistency (Think: All Variables Together)

Consistency means:

All CPUs must see all changes across all variables in the correct order.

Example:
Now imagine there are multiple whiteboards (multiple variables).
If a person writes on whiteboard 1, then on whiteboard 2,
everyone else should see those changes in the same order — not mixed up.

So, consistency is about the overall story being the same for everyone — across all variables.
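The whiteboard analogy can be simulated: a writer updates two "boards" in program order, but a remote reader receives the updates reordered. This is a toy sketch of what a weak ordering permits; the variable names are invented for the example.

```python
# The writer updates board_1 first, then board_2 (program order).
writer_ops = [("board_1", "A"), ("board_2", "B")]

# Simulate a system with no cross-variable ordering guarantee:
# the two updates reach a remote reader in the opposite order.
delivered = list(reversed(writer_ops))

reader_view = {}
observed_order = []
for var, val in delivered:
    reader_view[var] = val
    observed_order.append(var)

# Coherence per board is intact (each board's history is fine),
# but the reader's overall story disagrees with the writer's:
# it saw board_2 change before board_1 did.
```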





Two Ways to Define Consistency Models

  1. Issue: What rules control when a process can perform operations.

  2. View: What order of operations each process sees.





Types of Consistency Models

There are two main categories:

  • Data-Centric Consistency Models

  • Client-Centric Consistency Models


Common Consistency Models

  1. Strict Consistency

  2. Sequential Consistency

  3. Causal Consistency

  4. PRAM (or Processor) Consistency

  5. Weak Consistency

  6. Eventual Consistency

  7. Release Consistency


Strict Consistency

  • The strongest model.

  • Any write is immediately visible to all processors.

  • Rule: A read always shows the most recent write.

  • Drawback: High communication cost due to many messages.

  • Limitation: Assumes absolute global time, so that no two writes happen at exactly the same instant; real distributed systems cannot provide such a clock.
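An idealized strictly consistent register can be sketched as below. The `global_time` parameter is the unrealistic assumption the model makes: a perfect clock shared by everyone, letting writes be totally ordered.

```python
class StrictRegister:
    """Idealized strictly consistent register. Assumes a perfect
    global clock, so every read returns the most recent write."""
    def __init__(self, initial):
        self._value = initial
        self._last_write_time = 0.0

    def write(self, value, global_time):
        # With absolute global time, writes are totally ordered;
        # only a strictly newer write replaces the value.
        if global_time > self._last_write_time:
            self._value = value
            self._last_write_time = global_time

    def read(self, global_time):
        # A read at time t sees the latest write before t.
        return self._value

reg = StrictRegister("a")
reg.write("b", global_time=1.0)
latest = reg.read(global_time=2.0)   # always the newest value
```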


Sequential Consistency

  • Weaker than strict consistency.

  • The result of execution is as if all operations were executed in some single sequential order, with each processor's operations appearing in its own program order.

  • Simple and easier to understand.

  • Doesn’t guarantee best performance.

  • All actions must appear in a single, global sequence.
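A small checker makes the rule concrete: a global sequence is sequentially consistent only if it contains every operation and keeps each process's operations in program order. The operation labels (`w1(x)`, `r2(x)`, etc.) are hypothetical.

```python
def respects_program_order(global_order, process_ops):
    """True if global_order is one total order over all operations
    that preserves each process's own program order."""
    for ops in process_ops:
        positions = [global_order.index(op) for op in ops]
        if positions != sorted(positions):
            return False
    all_ops = [op for ops in process_ops for op in ops]
    return sorted(global_order) == sorted(all_ops)

p1 = ["w1(x)", "w1(y)"]   # process 1's program order
p2 = ["r2(x)", "r2(y)"]   # process 2's program order

# Interleaving the two processes is allowed...
ok = respects_program_order(["w1(x)", "r2(x)", "w1(y)", "r2(y)"], [p1, p2])
# ...but reordering within one process is not.
bad = respects_program_order(["w1(y)", "w1(x)", "r2(x)", "r2(y)"], [p1, p2])
```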


Sequential vs. Strict Consistency

  • Strict: Always shows the latest value.

  • Sequential: Shows values according to some agreed global order, which may lag behind the most recent write.


Causal Consistency

  • Weaker than sequential consistency.

  • If one operation depends on another, all processors must see them in the same order.

  • If two operations are independent, they can be seen in different orders.
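The causal rule can be checked with a simple dependency map: an operation may only appear in a processor's view after everything it depends on. This is an illustrative sketch, not a full vector-clock implementation; the operation names are invented.

```python
# "w2(x=2)" was issued after process 2 read x=1, which in turn
# returned process 1's write, so the three ops are causally related.
depends_on = {
    "w2(x=2)": {"r2(x=1)"},
    "r2(x=1)": {"w1(x=1)"},
}

def causally_ordered(view):
    """True if every op appears after all ops it depends on."""
    seen = set()
    for op in view:
        for dep in depends_on.get(op, ()):
            if dep not in seen:
                return False
        seen.add(op)
    return True

# Dependent ops must keep their order in every processor's view...
valid = causally_ordered(["w1(x=1)", "r2(x=1)", "w2(x=2)"])
# ...while seeing an effect before its cause is illegal.
invalid = causally_ordered(["w2(x=2)", "w1(x=1)", "r2(x=1)"])
```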


Weak Consistency

  • Doesn’t require every write to be seen immediately.

  • Synchronization points (like locks) ensure data consistency.

  • Before accessing data, all prior writes must be completed.


Eventual Consistency

  • Based on client-centric model.

  • Data updates are eventually reflected everywhere.

  • Common in systems with one writer and many readers (e.g., admin updates user database).

  • No write-write conflicts arise, since only a single process (e.g., the admin) updates the data.


Client-Centric Consistency Models

1. Monotonic Reads

  • Once a process reads a value, future reads will show the same or newer value.

  • Useful for reading emails or calendar events across servers.
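Monotonic reads are typically enforced per client session: the session remembers the newest version it has seen and rejects reads from replicas that are older. The class and version numbers below are hypothetical.

```python
class MonotonicReadSession:
    """Client session that never goes back in time: a read is only
    accepted from a replica at least as new as the last version seen."""
    def __init__(self):
        self.last_version = 0

    def read(self, replica_version, replica_value):
        if replica_version < self.last_version:
            return None   # replica is stale: reject the read
        self.last_version = replica_version
        return replica_value

session = MonotonicReadSession()
first = session.read(5, "inbox with 5 mails")   # accepted, version 5
stale = session.read(3, "inbox with 3 mails")   # rejected: older replica
fresh = session.read(7, "inbox with 7 mails")   # accepted, version 7
```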

2. Monotonic Writes

  • A process’s writes happen in the correct order across all servers.

  • Ensures consistent file versions or software builds.


PRAM (Pipelined RAM) Consistency

  • Weaker model; focuses on write operations only.

  • Writes from the same processor are seen in order.

  • Writes from different processors can be seen in any order.

  • Also known as FIFO Consistency.

  • Simple and easy to implement.
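The FIFO rule can be checked directly: within any processor's view, each source processor's writes must keep their issue order, while writes from different processors may interleave freely. The write labels below are invented for the example.

```python
def fifo_consistent(view, per_process_writes):
    """True if every process's own writes keep their FIFO
    (issue) order inside the given view."""
    for writes in per_process_writes:
        positions = [view.index(w) for w in writes]
        if positions != sorted(positions):
            return False
    return True

p1 = ["w1a", "w1b"]   # writes issued by processor 1, in order
p2 = ["w2a", "w2b"]   # writes issued by processor 2, in order

# Any interleaving of the two streams is allowed...
ok = fifo_consistent(["w2a", "w1a", "w2b", "w1b"], [p1, p2])
# ...but seeing w1b before w1a breaks processor 1's FIFO order.
bad = fifo_consistent(["w1b", "w1a", "w2a", "w2b"], [p1, p2])
```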


Release Consistency

  • Improves on weak consistency.

  • Uses two synchronization operations:

    • Acquire: Enter critical section.

    • Release: Exit critical section.

  • Updates are shared with other processors at synchronization points.
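The acquire/release pair maps naturally onto a lock. The sketch below uses Python's `threading.Lock` purely as an illustration: updates made inside the critical section become visible to the next thread that acquires the lock.

```python
import threading

# Release consistency sketch: shared data is only guaranteed to be
# visible to others at synchronization points (Acquire/Release).
lock = threading.Lock()
shared = {"x": 0}

def writer():
    lock.acquire()        # Acquire: enter critical section
    shared["x"] = 42      # updates made inside the section...
    lock.release()        # Release: ...are published on exit

def reader(out):
    lock.acquire()        # Acquire: guaranteed to see prior releases
    out.append(shared["x"])
    lock.release()

result = []
t1 = threading.Thread(target=writer)
t1.start(); t1.join()     # writer releases before reader acquires
t2 = threading.Thread(target=reader, args=(result,))
t2.start(); t2.join()
```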


Module A - The Normalization Engine Linguistic Challenge: Roman Urdu lacks standardized orthography (e.g., "kesa" vs "kaisa"), creating orthographic "noise" that significantly degrades the accuracy of downstream AI models. Technical Role: Acts as a Sequence-to-Sequence (Seq2Seq) transliteration and lexical normalization layer to standardize inputs before analysis. Model: A specialized transformer architecture, specifically m2m100 fine-tuned on parallel corpora or UrduParaphraseBERT. Primary Dataset: Roman-Urdu-Parl (RUP). A large-scale parallel corpus of 6.37 million sentence pairs designed to support machine transliteration and word embedding training. Link: https://arxiv.org/abs/2503.21530 Outcome: Reduces orthographic noise by achieving up to 97.44% Char-BLEU accuracy for Roman-Urdu to Urdu conversion, ensuring Module B receives high-quality "clean" data for risk analysis. Module B - Risk Stratification (BERT) Heading: The "Safety ...