Building a Go Wallet App with a Redis-like Single-Threaded Event Loop Concept

Muhammad Arief Rahman

December 01, 2024

15 min read


golang

algorithm



Understanding The Problem

Lately, my friend went through interviews with a few companies and received some interesting take-home assignments. One of them was to build a wallet application using an in-memory database. He asked me how to manage concurrency to ensure the application handles concurrent requests correctly without causing issues in the database.

I'll be honest—I'm not a big fan of take-home tests. While they can showcase practical skills, they often demand a significant time commitment without providing direct interaction or insight into the company's team dynamics or work culture. However, this particular assignment caught my interest because of the fascinating challenges it posed around concurrency and database design.

The requirements for the take-home test were:

  1. Users can be created and have a wallet.
  2. Users can top up their wallet.
  3. Users can transfer money from their wallet to another, but only if their balance is sufficient.

From these requirements, I can already identify potential problems that may arise if the operations aren't handled properly according to the ACID principles (Atomicity, Consistency, Isolation, Durability): issues like lost updates and write skew could occur. However, since this is an in-memory database, we only need to focus on three of the four ACID properties, namely Atomicity, Consistency, and Isolation, leaving Durability aside.

Before we dive into our solutions, let's first take a closer look at the problem.

Lost Updates

The lost updates problem occurs when two transactions read and write the same data simultaneously. In such cases, one modification can silently overwrite the other, so one of the updates is lost and never reflected in the database. In the context of a wallet application, this issue can arise when two concurrent requests attempt to update the wallet balance at the same time. Consider the following scenario:

In this scenario, User 1, the owner of the wallet, wants to top up their balance by IDR 10,000. Meanwhile, User 2 also wants to transfer IDR 10,000 to User 1. Since these transactions are processed independently, they are unaware of each other's state, leading to an unexpected final balance. Instead of the correct total of IDR 25,000, the wallet ends up with a balance of IDR 15,000 after both transactions.
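To make the anomaly concrete, here is a minimal, intentionally unsynchronized Go sketch (a hypothetical illustration, not code from the wallet app) in which two goroutines perform the same read-modify-write and one update can be lost:

// Hypothetical sketch of a lost update: both goroutines may read the same
// starting balance, so the later write silently overwrites the earlier one.
// (The data race here is intentional; `go run -race` will flag it.)
package main

import (
	"fmt"
	"sync"
)

func main() {
	balance := 5000 // IDR 5,000 starting balance

	var wg sync.WaitGroup
	credit := func(amount int) {
		defer wg.Done()
		read := balance         // both goroutines may read 5,000
		balance = read + amount // read-modify-write without coordination
	}

	wg.Add(2)
	go credit(10000) // User 1's top-up
	go credit(10000) // User 2's incoming transfer
	wg.Wait()

	// Expected IDR 25,000, but the race can leave IDR 15,000.
	fmt.Println("final balance:", balance)
}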

In a modern database like PostgreSQL, for example, we could solve this problem using several techniques, including:

  1. Atomic Write Operations: By ensuring the update process is atomic, we can guarantee that each transaction is fully processed before another begins. This prevents overlapping operations from interfering with each other.
  2. Explicit Locking: Locking the data during a transaction guarantees that other operations cannot access or modify it until the current transaction completes, preserving data consistency.
  3. Compare and Set: Using the compare and set approach allows you to check if the value has changed before updating it. If it has, the transaction is aborted or retried, preventing lost updates.
-- Atomic write operations
UPDATE wallets SET balance = balance + 10000 WHERE id = 1;

------------------------------
-- Explicitly locking
-- First transaction (runs concurrently with the second)
BEGIN;
-- Applications will use data returned by this query
SELECT * FROM wallets WHERE id = 1 FOR UPDATE;
-- Update by applying the summed value from application calculations
UPDATE wallets SET balance = 15000 WHERE id = 1;
COMMIT;

-- Second transaction (runs concurrently with the first)
BEGIN;
-- Applications will use data returned by this query
SELECT * FROM wallets WHERE id = 1 FOR UPDATE;
-- Update by applying the summed value from application calculations
UPDATE wallets SET balance = 25000 WHERE id = 1;
COMMIT;

------------------------------
-- Compare and Set

UPDATE wallets SET balance = 15000 WHERE id = 1 AND balance = 5000;

Write Skew

Write skew is a concurrency issue that arises when multiple transactions read shared data, perform logic based on those reads, and then write updates without being aware of each other's actions. Unlike a lost update, where the primary issue is overwriting data, write skew involves inconsistencies arising from logical dependencies between reads and writes.

To understand this concept better, let's consider a scenario in the wallet application where we create users based on unique usernames. The application should ensure that no two users can share the same username. Here's how write skew might manifest in this case:

  1. Two users, User A and User B, try to create an account simultaneously with the same username, say insomnius.
  2. Both processes start by checking if insomnius already exists in the database. Since the database is empty, both transactions find that the username is available.
  3. Each transaction then inserts a new record into the database for insomnius.
  4. As a result, the database ends up with two users having the same username, violating the uniqueness constraint.

This issue occurs because the transactions make decisions based on outdated or incomplete information due to concurrent reads and writes. The database wasn't designed to enforce this constraint in isolation, allowing multiple transactions to violate logical consistency.

Most of you may already know how to fix this—using a unique constraint to safeguard the logic. This ensures that even if two processes attempt to insert the same username simultaneously, the database will prevent duplicates by rejecting one of the transactions.

However, while a unique constraint is a simple and effective solution, it might not always be the most optimal choice, especially in complex scenarios. Other possible solutions include:

  1. Utilizing staging tables: This approach involves using a temporary or staging table (e.g., user_to_be_created) to handle username creation requests. All operations would first insert the username into this staging table. Once confirmed, the username would then be inserted into the main users table, and the entry in the staging table would be deleted. This avoids direct concurrent writes on the primary users table and reduces contention by isolating operations.
  2. Explicitly locking row dependencies: By locking specific rows during a transaction, this method ensures that no other transaction can read or write the locked data until the current transaction is complete. While this guarantees consistency, it may introduce contention and impact performance in high-concurrency environments.
-- Unique constraint
-- *This is the most efficient and recommended solution to ensure unique usernames.
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(255) UNIQUE NOT NULL,
    wallet_balance NUMERIC DEFAULT 0
);
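Outside of SQL, the same guard boils down to making the check and the insert a single atomic step. Here is a minimal Go sketch (hypothetical names, not part of the wallet app) that protects the check-then-insert with a mutex:

// Hypothetical sketch: the check and the insert run inside one critical
// section, so two concurrent registrations of "insomnius" cannot both pass
// the existence check.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	users = map[string]struct{}{}

	ErrUsernameTaken = errors.New("username already taken")
)

func register(username string) error {
	mu.Lock()
	defer mu.Unlock() // check and insert are one atomic step

	if _, exists := users[username]; exists {
		return ErrUsernameTaken
	}
	users[username] = struct{}{}
	return nil
}

func main() {
	fmt.Println(register("insomnius")) // <nil>
	fmt.Println(register("insomnius")) // username already taken
}

The single-threaded event loop we build later achieves the same effect without an explicit mutex, because only one operation ever touches the data at a time.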

I'll explore other examples in a separate article, as applying them to the current use case would be impractical and unrealistic. That will allow for a more focused discussion with relevant scenarios and context.


On Database Isolation Level

Before we move forward, we need to understand database isolation levels. Isolation levels define the degree to which a transaction is isolated from other transactions in a database system. They play a vital role in ensuring data consistency, accuracy, and concurrency by determining how transactions interact with each other. Database isolation levels are a trade-off between performance and consistency, with higher isolation levels providing stricter rules at the cost of slower performance.

The SQL standard defines four isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. These levels progressively increase in strictness, addressing different types of anomalies that can occur during concurrent database access.

| Isolation Level \ Read phenomenon | Dirty Read | Non-Repeatable Read | Phantom Read | Performance Rank | Consistency Rank |
|---|---|---|---|---|---|
| Read Uncommitted | Yes | Yes | Yes | ⭐⭐⭐⭐ | ⭐ |
| Read Committed | No | Yes | Yes | ⭐⭐⭐ | ⭐⭐ |
| Repeatable Read | No | No | Yes | ⭐⭐ | ⭐⭐⭐ |
| Serializable | No | No | No | ⭐ | ⭐⭐⭐⭐ |
  • READ UNCOMMITTED: Allows reading uncommitted data, leading to dirty reads. Fastest performance but least accurate.
  • READ COMMITTED: Prevents dirty reads but allows non-repeatable reads and phantom reads. Balances performance and consistency.
  • REPEATABLE READ: Prevents dirty and non-repeatable reads but allows phantom reads. Ideal for repeated access to the same rows.
  • SERIALIZABLE: Prevents dirty, non-repeatable, and phantom reads. Ensures the highest consistency but has the highest performance cost.

In modern databases such as PostgreSQL, the default isolation level is READ COMMITTED. However, it can be configured up to the SERIALIZABLE isolation level if required; see the PostgreSQL documentation on transaction isolation.
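As an aside, if you are talking to such a database from Go, the standard database/sql package lets you request a stricter isolation level per transaction. A sketch, assuming an already-open *sql.DB and a driver that supports the level; the table and column names are illustrative:

import (
	"context"
	"database/sql"
)

// topUpSerializable runs a balance update inside a SERIALIZABLE transaction.
func topUpSerializable(ctx context.Context, db *sql.DB, walletID, amount int64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	if _, err := tx.ExecContext(ctx,
		"UPDATE wallets SET balance = balance + $1 WHERE id = $2",
		amount, walletID,
	); err != nil {
		return err
	}

	return tx.Commit()
}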

Database Anomalies

Database anomalies refer to inconsistencies or unexpected behaviors that can occur when multiple transactions interact concurrently under different isolation levels. These anomalies are primarily caused by the lack of appropriate isolation between transactions. The main types of database anomalies are dirty reads, non-repeatable reads, and phantom reads.

Dirty reads

A dirty read (aka uncommitted dependency) occurs when a transaction retrieves a row that has been updated by another transaction that is not yet committed.


Non-repeatable reads

A non-repeatable read occurs when a transaction retrieves a row twice and that row is updated by another transaction that is committed in between.


Phantom reads

A phantom read occurs when a transaction retrieves a set of rows twice and new rows are inserted into or removed from that set by another transaction that is committed in between.


The following table outlines the default isolation levels for popular databases, detailing their behaviors, purposes, and sources.

| Database | Default Isolation Level | Details | Source |
|---|---|---|---|
| MySQL (InnoDB) | Repeatable Read | Ensures consistent reads for the duration of a transaction. | MySQL Documentation |
| PostgreSQL | Read Committed | Only sees data committed before the transaction began. | PostgreSQL Documentation |
| SQL Server | Read Committed | Uses row versioning to achieve consistency. | SQL Server Documentation |
| Oracle | Read Committed | Provides a snapshot of data committed before the query. | Oracle Documentation |
| SQLite | Serializable (via Write-Ahead Logging) | Strong consistency by locking the entire database. | SQLite Documentation |

Summary

Isolation levels manage the trade-off between performance and consistency by controlling how transactions interact. Stricter levels ensure fewer anomalies but come with higher performance costs. Modern databases prioritize performance with configurable options for stricter isolation when needed.


On Database Serializability

Now that we understand database isolation levels, the next step is choosing how to implement the isolation we need. Today, three popular approaches are widely adopted by modern databases for achieving serializability:

  1. Actual Serial Execution
  2. Two Phase Locking (2PL)
  3. Multi-Version Concurrency Control (MVCC)

Let's explore each of these methods in detail.

Actual Serial Execution

Actual serial execution handles concurrency by removing it entirely, using a single-threaded process. A notable example of a database that uses this approach is Redis. This method is viable because RAM keeps getting cheaper, making it feasible to store entire datasets in memory for certain use cases. However, a limitation of this approach is that it cannot fully utilize multiple CPU cores, since a single thread executes all commands.

Serial execution is the simplest concurrency control mechanism that processes transactions one at a time in a sequential order.

  • Strict Ordering: Each transaction executes completely before the next one begins.
  • Conflict Resolution: No concurrency conflicts can occur since there is no parallel execution.
  • Data Consistency: Always maintains database consistency without complex locking or versioning.

This approach guarantees correctness but severely limits throughput since only one transaction can execute at a time, regardless of whether they access the same data.

While inefficient for most multi-user systems, serial execution is used in certain specialized databases or for critical transactions that cannot risk conflicts. Some modern in-memory databases use this approach for simplicity and predictability.

In the initial state, three transactions wait to be processed by the database; serial execution handles them one at a time to ensure data consistency.
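Here is a minimal Go sketch of the idea, before we get to the full implementation later in this article: every command is pushed through one channel and executed by a single goroutine, so no two commands can ever interleave.

package main

import "fmt"

func main() {
	commands := make(chan func())
	done := make(chan struct{})

	// The only goroutine that ever touches the data.
	go func() {
		defer close(done)
		for cmd := range commands {
			cmd() // commands run strictly one at a time, in arrival order
		}
	}()

	balance := 0
	commands <- func() { balance += 10000 } // User 1's top-up
	commands <- func() { balance += 10000 } // User 2's transfer
	close(commands)
	<-done

	fmt.Println(balance) // always 20000: a lost update is impossible here
}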

Two-Phase Locking (2PL)

Two-Phase Locking (2PL) is a battle-proven algorithm that has been reliably used for over 30 years. It uses pessimistic concurrency control, ensuring strict consistency by managing locks carefully during transaction execution. This approach is widely adopted by databases such as MySQL with its InnoDB engine.

The 2PL algorithm uses two types of locks: shared locks and exclusive locks. In strict 2PL implementations, it's common for read operations to block write operations and vice versa. However, a notable trade-off of this algorithm is the potential for deadlocks.

Two-Phase Locking is a concurrency control protocol that ensures serializability by dividing transaction execution into two distinct phases:

  • Growing Phase: Transactions can only acquire locks, but cannot release any locks.
  • Shrinking Phase: Transactions can only release locks, but cannot acquire any new locks.

The key rule is that once a transaction releases any lock (enters shrinking phase), it can never acquire another lock. This prevents cascading aborts and ensures conflict serializability.

2PL guarantees serializability but can lead to deadlocks when transactions wait for each other's locks in a cycle. Deadlock detection or prevention mechanisms are often needed.
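To illustrate the two phases, here is a toy Go sketch (an analogy only: one exclusive lock per row, no shared locks, and no deadlock detection) where locks may only be acquired while growing and are all released at once when shrinking:

package main

import (
	"errors"
	"fmt"
	"sync"
)

type rowLock = sync.Mutex

type txn struct {
	held      []*rowLock
	shrinking bool
}

// Growing phase: the transaction may only acquire locks.
func (t *txn) lock(l *rowLock) error {
	if t.shrinking {
		return errors.New("2PL violation: cannot acquire a lock after releasing")
	}
	l.Lock()
	t.held = append(t.held, l)
	return nil
}

// Shrinking phase: release everything; no further acquisitions are allowed.
func (t *txn) releaseAll() {
	t.shrinking = true
	for _, l := range t.held {
		l.Unlock()
	}
	t.held = nil
}

func main() {
	var walletA, walletB rowLock
	t := &txn{}

	_ = t.lock(&walletA) // growing: lock both rows before touching them
	_ = t.lock(&walletB)
	// ... read and update both wallets here ...
	t.releaseAll() // shrinking: release everything at once

	fmt.Println(t.lock(&walletA)) // error: acquisition after release is rejected
}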


Multi-Version Concurrency Control (MVCC)

Multi-Version Concurrency Control (MVCC), also referred to as Serializable Snapshot Isolation (SSI), has been adopted by PostgreSQL since version 9.1 (source). First introduced in 2008, SSI is relatively newer compared to Two-Phase Locking (2PL).

Unlike 2PL, MVCC relies solely on write locks, avoiding the need for both shared and exclusive locks. This design reduces performance overhead while still maintaining the same high level of consistency. Consequently, MVCC provides a more efficient approach to concurrency control without compromising data integrity.

MVCC is a concurrency control method that creates a new version of a data item for each write operation, rather than overwriting old data.

  • Timestamp-Based Access: Each transaction is assigned a timestamp when it starts.
  • Read Operations: Read the most recent version of the data that was committed before the transaction's timestamp.
  • Write Operations: Create a new version of the data tagged with the transaction's timestamp.

This approach allows read operations to proceed without blocking write operations, dramatically improving concurrency for read-heavy workloads.

MVCC is used in PostgreSQL, Oracle, MySQL InnoDB, and many modern databases. It eliminates read-write conflicts but requires periodic garbage collection to remove old versions.
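Here is a toy Go sketch of the core idea (assumptions: a single key, integer timestamps, and no garbage collection): writers append new versions, and readers pick the newest version committed at or before their snapshot timestamp, so reads never block writes.

package main

import "fmt"

type version struct {
	commitTS int
	value    any
}

type mvccCell struct {
	versions []version // ordered by commitTS ascending
}

// write appends a new version instead of overwriting the old one.
func (c *mvccCell) write(commitTS int, value any) {
	c.versions = append(c.versions, version{commitTS: commitTS, value: value})
}

// read returns the newest version committed at or before snapshotTS.
func (c *mvccCell) read(snapshotTS int) (any, bool) {
	for i := len(c.versions) - 1; i >= 0; i-- {
		if c.versions[i].commitTS <= snapshotTS {
			return c.versions[i].value, true
		}
	}
	return nil, false
}

func main() {
	var balance mvccCell
	balance.write(1, 5000)
	balance.write(3, 15000)

	fmt.Println(balance.read(2)) // 5000 true: a reader with snapshot 2 still sees the old version
	fmt.Println(balance.read(3)) // 15000 true: snapshot 3 sees the new version
}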


Summary

Concurrency control methods are essential for ensuring data consistency and managing the trade-offs between performance and isolation levels in database systems. Below is a comparison of three common approaches—Actual Serial Execution, MVCC (Serializable Snapshot Isolation), and Two-Phase Locking (Strict Serializable)—across key performance and operational attributes.

| Method \ Attribute | Performance Rank | Consistency Rank | Deadlock Risk | Lock Overhead |
|---|---|---|---|---|
| Actual Serial Execution | ⭐ | ⭐⭐⭐⭐⭐ | No | None |
| MVCC (Serializable Snapshot Isolation) | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Minimal | Minimal |
| Two-Phase Locking (Strict Serializable) | ⭐⭐ | ⭐⭐⭐⭐⭐ | Yes | High |
  1. Actual Serial Execution: Represents the strictest form of concurrency control, where transactions are executed one at a time, ensuring perfect isolation but with very low performance.
  2. MVCC: Balances high consistency with better performance by maintaining multiple versions of data. Avoids locking for reads but may introduce additional storage costs.
  3. Two Phase Locking: Guarantees strict serializability using shared and exclusive locks but can lead to deadlocks and higher overhead due to lock management.

Choosing the Right Strategy

Before we jump into a solution, we first need to be clear about what we are aiming for and which trade-offs we can accept given our requirements and needs.

Strategy Selection

After evaluating the options, serial execution is the most suitable choice for the following reasons:

  1. Ease of Implementation: Serial execution is straightforward to implement without requiring complex setups.
  2. Data Fits in Memory: Since this is a take-home test, the dataset is small enough to fit comfortably in application memory.
  3. No Need to Maintain Durability: Since we aren't focused on long-term persistence, serial execution eliminates unnecessary overhead.
  4. Highest Level of Serializability: It ensures operations execute in a strictly sequential manner, maintaining data consistency and reducing potential conflicts.

Given these factors, serial execution provides the simplicity, efficiency, and reliability needed for the use case at hand.

Solution Design

After determining the best strategy for our use cases, the next step is to outline a high-level design. This includes defining the structure of what we're building, the data types we'll use in Golang, and the operations involved:

  1. Data Structure: We'll use map[string]any for storing data, avoiding sync.Map to reduce mutex locking overhead during data insertion.
  2. API Layer: The HTTP API will be implemented using the echo framework.
  3. Database Event Loop Mechanism: A single goroutine will be spawned to handle the event loop, utilizing channels for communication.

Single-Threaded Event Loop Database

The design below demonstrates how the wallet database maintains data consistency using a single-threaded event loop while still allowing non-blocking operations outside the database.

Key Features

  • Non-blocking read operations
  • Ordered write operations via queue
  • Single-threaded write processing
  • Atomic transactions support

Benefits

  • Prevents race conditions
  • Ensures data consistency
  • Optimized read performance
  • Predictable write ordering
The flow starts at the handler, which receives an API request and forwards it to the appropriate repository layer; the repository is the entry point for database operations, and from there read operations, write operations, and commits pass through the event loop.

On Implementations

Here, we examine the implementation details of the core features of our application, including the event loop, transaction management, and API endpoints. We provide code examples and explanations to illustrate how these components work together to ensure data consistency and reliable operation.

Implementing Serial Execution

Our solution leverages serial execution to ensure data consistency. A single goroutine manages an event loop, processing operations sequentially from a channel. This eliminates concurrency issues like lost updates and write skew. The code below demonstrates the core components of this implementation.

// file: instance.go

// Start database daemon
func (i *Instance) Start() {
	i.operationOpen.Store(true)

	for op := range i.operationChan {
		if !i.operationOpen.Load() {
			// Operations are already closed; reply with an error so the caller
			// waiting on the result channel doesn't block forever.
			op.result <- fmt.Errorf("database instance is closed")
			continue
		}
		i.operationWg.Add(1)

		// Wrap it in a function so panics can be recovered and returned as errors.
		err := func() (err2 error) {
			defer func() {
				if v := recover(); v != nil {
					err2 = fmt.Errorf("error %v", v)
				}
				i.operationWg.Done()
			}()
			return op.op(i)
		}()
		op.result <- err
	}
}

func (i *Instance) enqueueProcess(f func(*Instance) error, operationName string) error {
	opArgument := operationArgument{
		op:        f,
		result:    make(chan error, 1),
		operation: operationName,
	}
	i.operationChan <- opArgument

	return <-opArgument.result
}

func (i *Instance) Close() {
	// Flip the flag so no new operations are accepted
	i.operationOpen.Store(false)

	// Wait for in-progress operations to finish
	i.operationWg.Wait()

	// Lastly, close the channel
	close(i.operationChan)
}
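For context, the event loop above assumes supporting types roughly like the following. This is a sketch inferred from the snippet (using sync and sync/atomic); the exact definitions live in the repository and may differ slightly.

// file: instance.go (sketched supporting types)

import (
	"sync"
	"sync/atomic"
)

type operationArgument struct {
	op        func(*Instance) error // the work to run inside the event loop
	result    chan error            // buffered channel the caller blocks on
	operation string                // operation name, useful for debugging
}

type Instance struct {
	tables                map[string]map[string]any // table name -> primary key -> row
	operationChan         chan operationArgument    // serialized stream of operations
	operationOpen         atomic.Bool               // accepts new operations while true
	operationWg           sync.WaitGroup            // tracks in-flight operations during Close
	transactionIdentifier string                    // marks instances cloned for transactions
}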

Implementing Transactions

Transactions are fundamental for ensuring atomicity and consistency. Our implementation uses a Transaction struct to track changes within a transaction. The Transaction method allows users to define a series of operations that either succeed or fail as a single unit. Changes are only applied to the database upon successful completion of all operations within the transaction.

// file: instance.go

func (i *Instance) Transaction(f func(*Transaction) error) error {
	op := func(x *Instance) error {
		transaction := &Transaction{
			tables:  x.tables,
			changes: make(map[string]map[string]any),
		}

		if err := f(transaction); err != nil {
			// rollback: nothing to apply, the staged changes are simply discarded
			return err
		}

		// commit
		for table, change := range transaction.changes {
			assertedTable := x.tables[table]

			for primaryKey, row := range change {
				assertedTable[primaryKey] = row
			}
		}

		return nil
	}

	return i.enqueueProcess(op, "transaction")
}


// ......

// file: transaction.go

type Transaction struct {
	tables  map[string]map[string]any
	changes map[string]map[string]any
}

func (t *Transaction) GetTable(tableName string) (*Table, error) {
	table, found := t.tables[tableName]
	if !found {
		return nil, ErrTableIsNotFound
	}

	if _, found := t.changes[tableName]; !found {
		t.changes[tableName] = make(map[string]any)
	}

	// Identification only: marks this table's operations as running inside a transaction.
	clonedInstance := NewInstance()
	clonedInstance.transactionIdentifier = "sub"

	return &Table{
		data: table,
		enqueueProcess: func(f func(*Instance) error, operationName string) error {
			return f(clonedInstance)
		},
		changes: t.changes[tableName],
	}, nil
}

The Table Abstraction: Managing Data and Changes

Our database implementation uses a Table struct to represent and manage data within a table. This struct provides an abstraction layer over the raw data storage and handles both direct data access and transactional changes. Here's a breakdown of its key components and methods:

type Table struct {
	data           map[string]any
	changes        map[string]any
	enqueueProcess func(f func(*Instance) error, operationName string) error
}
  1. data map[string]any: This map holds the actual, persistent data of the table. Keys are strings (representing IDs), and values are of type any.
  2. changes map[string]any: This map stores modifications made during a transaction. It acts as a temporary staging area. Changes here are not immediately reflected in data until the transaction commits.
  3. enqueueProcess func(f func(*Instance) error, operationName string) error: This function is the crucial link to the database's event loop. It allows the Table to submit operations (like data modifications) to the event loop for serial execution.
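To show how these pieces fit together, here is a hedged sketch of read and write helpers on Table, followed by a transfer built on the Transaction method. The Get and Put methods, the Wallet struct, and the "wallets" table name are assumptions made for this article; the repository's actual API may differ.

// Sketch (assumes "fmt" is imported alongside the package's existing imports).
// Reads prefer data staged in this transaction, then fall back to the
// committed data; writes are staged in changes and applied only on commit.
func (t *Table) Get(id string) (any, bool) {
	if t.changes != nil {
		if row, ok := t.changes[id]; ok {
			return row, true
		}
	}
	row, ok := t.data[id]
	return row, ok
}

func (t *Table) Put(id string, row any) error {
	return t.enqueueProcess(func(*Instance) error {
		if t.changes != nil {
			t.changes[id] = row // staged; copied into data when the transaction commits
			return nil
		}
		t.data[id] = row // direct write outside a transaction
		return nil
	}, "put")
}

With those assumed helpers, a transfer becomes a single transactional unit: read both wallets, validate the balance, and stage both updates, with any returned error discarding the staged changes.

type Wallet struct {
	UserID  string
	Balance int64
}

func transferFunds(instance *Instance, fromID, toID string, amount int64) error {
	return instance.Transaction(func(tx *Transaction) error {
		wallets, err := tx.GetTable("wallets")
		if err != nil {
			return err
		}

		fromRow, ok := wallets.Get(fromID)
		if !ok {
			return fmt.Errorf("wallet %s not found", fromID)
		}
		toRow, ok := wallets.Get(toID)
		if !ok {
			return fmt.Errorf("wallet %s not found", toID)
		}

		from, to := fromRow.(Wallet), toRow.(Wallet)
		if from.Balance < amount {
			return fmt.Errorf("insufficient balance") // rollback: staged changes are discarded
		}

		from.Balance -= amount
		to.Balance += amount

		// Stage both updates; the event loop applies them atomically on commit.
		if err := wallets.Put(fromID, from); err != nil {
			return err
		}
		return wallets.Put(toID, to)
	})
}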

API Layer: Defining the Endpoints

Our wallet application exposes its functionality through a well-defined API. We use the Echo framework for handling HTTP requests. The API endpoints are designed to provide access to user authentication, wallet balances, and transaction management. Here's an overview of the defined routes:

e.POST("/users", handler.UserRegister(authAggregator))
e.POST("/users/signin", handler.UserSignin(authAggregator))

oauthMiddleware := middleware.Oauth(func(c echo.Context, token string) (bool, error) {
    t, err := userTokenRepo.FindByToken(token)
    if err != nil {
        return false, err
    }

    c.Set("current_user", t)
    return true, nil
})

e.GET("/wallet", handler.CheckBalance(walletRepo), oauthMiddleware)
e.GET("/wallet/top-transfer", handler.TopTransfer(mutationRepo), oauthMiddleware)
e.POST("/transactions/topup", handler.TopUp(trxAggregator), oauthMiddleware)
e.POST("/transactions/transfer", handler.Transfer(trxAggregator), oauthMiddleware)

Performance Benchmarks

We conducted benchmarks to evaluate the performance of our database and transfer process. The results are summarized below:

| Benchmark | Operations/Second | Nanoseconds/Operation | Memory/Operation | Allocations/Operation |
|---|---|---|---|---|
| DB Transaction | ~48,580 | 22,459 ns | 40,989 B | 182 |
| Transfer Process | ~296,427 | 3,563 ns | 10,936 B | 31 |

These benchmarks were run on a 12th Gen Intel(R) Core(TM) i5-12400F processor.
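These numbers come from Go's built-in benchmarking. Here is a sketch of how such a benchmark could be written; the setup and names are assumptions, and the repository's actual benchmarks may differ.

// file: instance_bench_test.go (illustrative sketch, same package as Instance)

import "testing"

func BenchmarkTransaction(b *testing.B) {
	instance := NewInstance()
	go instance.Start() // run the event loop in the background
	defer instance.Close()

	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = instance.Transaction(func(tx *Transaction) error {
			return nil // measure only the enqueue/commit plumbing
		})
	}
}

Running go test -bench=. -benchmem reports ns/op, B/op, and allocs/op, the same units shown in the table above.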

Feature Testing with a Bash Script

To ensure the correct functionality of our wallet application, we've created a Bash script for end-to-end feature testing. This script automates a series of API calls to simulate user registration, login, wallet top-up, and fund transfers. It uses curl for making HTTP requests and jq for parsing JSON responses. Here's the script:

#!/bin/bash

echo "=== Registering Users ==="
curl -s localhost:8000/users -XPOST --data '{"email":"[email protected]","password":"rahasia"}' || echo "Failed to register User 1"
curl -s localhost:8000/users -XPOST --data '{"email":"[email protected]","password":"rahasia"}' || echo "Failed to register User 2"

echo -e "\n=== Signing in Users ==="
TOKEN_USER_1=$(curl -s localhost:8000/users/signin -XPOST --data '{"email":"[email protected]","password":"rahasia"}' | jq -r '.data.token')
echo "TOKEN USER 1: $TOKEN_USER_1"
TOKEN_USER_2=$(curl -s localhost:8000/users/signin -XPOST --data '{"email":"[email protected]","password":"rahasia"}' | jq -r '.data.token')
echo "TOKEN USER 2: $TOKEN_USER_2"

echo -e "\n=== Fetching Wallet Info ==="
USER_ID_1=$(curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_1" | jq -r '.data.user_id')
echo "User 1 ID: $USER_ID_1"
USER_ID_2=$(curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_2" | jq -r '.data.user_id')
echo "User 2 ID: $USER_ID_2"

echo -e "\n=== Topping Up Wallets ==="
echo "User 1 Top-Up: 100"
curl -s localhost:8000/transactions/topup -H "Authorization: Bearer $TOKEN_USER_1" --data '{"amount":100}' | jq
echo "User 2 Top-Up: 500"
curl -s localhost:8000/transactions/topup -H "Authorization: Bearer $TOKEN_USER_2" --data '{"amount":500}' | jq

echo -e "\n=== Checking Wallet Balances ==="
echo "User 1 Wallet Balance:"
curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_1" | jq
echo "User 2 Wallet Balance:"
curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_2" | jq

echo -e "\n=== Transferring Funds ==="
echo "User 1 Transfer to User 2: 20"
curl -s localhost:8000/transactions/transfer -H "Authorization: Bearer $TOKEN_USER_1" --data "{\"amount\":20,\"to\":\"$USER_ID_2\"}" | jq
echo "User 1 Transfer to User 2: 10"
curl -s localhost:8000/transactions/transfer -H "Authorization: Bearer $TOKEN_USER_1" --data "{\"amount\":10,\"to\":\"$USER_ID_2\"}" | jq
echo "User 1 Transfer to User 2: 30"
curl -s localhost:8000/transactions/transfer -H "Authorization: Bearer $TOKEN_USER_1" --data "{\"amount\":30,\"to\":\"$USER_ID_2\"}" | jq

echo "User 2 Transfer to User 2: 40"
curl -s localhost:8000/transactions/transfer -H "Authorization: Bearer $TOKEN_USER_2" --data "{\"amount\":40,\"to\":\"$USER_ID_1\"}" | jq
echo "User 2 Transfer to User 2: 20"
curl -s localhost:8000/transactions/transfer -H "Authorization: Bearer $TOKEN_USER_2" --data "{\"amount\":20,\"to\":\"$USER_ID_1\"}" | jq

echo -e "\n=== Final Wallet Balances ==="
echo "User 1 Wallet Balance:"
curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_1" | jq
echo "User 2 Wallet Balance:"
curl -s localhost:8000/wallet -H "Authorization: Bearer $TOKEN_USER_2" | jq

echo -e "\n=== Top Transfer ==="
echo "User 1 Top Transfer:"
curl -s localhost:8000/wallet/top-transfer -H "Authorization: Bearer $TOKEN_USER_1" | jq
echo "User 2 Top Transfer:"
curl -s localhost:8000/wallet/top-transfer -H "Authorization: Bearer $TOKEN_USER_2" | jq

This script, when executed, performs the following actions:

  1. Registers two users.
  2. Logs in the users and retrieves their authentication tokens.
  3. Retrieves the user IDs.
  4. Tops up the wallets of both users.
  5. Verifies the wallet balances.
  6. Performs several fund transfers between the users.
  7. Checks the final wallet balances to confirm the transfers were successful.
  8. Retrieves and verifies the top transfer history.

Test Result: Success

All test cases within the script executed successfully, demonstrating the correct behavior of the wallet application's core features. The final wallet balances matched the expected values after the transfers, confirming the accuracy of the transaction logic. The top transfer history also correctly reflected the performed transactions. This automated testing provides confidence in the reliability and integrity of our wallet application.

You can see the test results image HERE.

Conclusion

This article explored building a simplified in-memory wallet application using Go, emphasizing ACID principles (atomicity, consistency, isolation) and concurrency control. We demonstrated how a single-threaded event loop effectively mitigates concurrency issues like lost updates and write skew. We discussed database isolation levels and serializability, justifying our choice of serial execution for this in-memory, non-persistent use case. The implementation showcased the event loop, transactions, and API design. Benchmarks and feature testing validated the application's performance and correctness. While focused on a simplified scenario, the core concepts of ACID properties and concurrency management remain crucial for any robust data-driven application. For larger datasets or persistent systems, mechanisms like MVCC or 2PL would be necessary. The complete code for this project is available on GitHub: https://github.com/insomnius/wallet-event-loop/

Test Your Knowledge

Which database anomaly occurs when a transaction reads data that has been modified by another transaction that hasn't yet been committed?

Lost update
Dirty read
Non-repeatable read
Phantom read

Which isolation level provides the highest consistency but lowest performance?

READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE

Which concurrency control method completely eliminates concurrency by processing operations one at a time?

Two-Phase Locking (2PL)
Multi-Version Concurrency Control (MVCC)
Actual Serial Execution
Optimistic Concurrency Control

Which of the following is NOT one of the ACID principles discussed in the article?

Atomicity
Consistency
Isolation
Durability

What concurrency issue occurs when multiple transactions read shared data, make decisions based on those reads, and then write updates without being aware of each other's actions?

Write skew
Lost update
Phantom read
Non-repeatable read
