GoSuda

Go Concurrency Starter Pack

By snowmerak

Overview

Brief Introduction

The Go language offers numerous tools for concurrency management. This article will introduce some of these tools and associated techniques.

Goroutine?

A goroutine is a new form of concurrency model supported by the Go language. Typically, to perform multiple tasks concurrently, a program obtains OS threads from the operating system, executing tasks in parallel up to the number of available cores. For achieving concurrency at a finer granularity, green threads are created in userland, allowing multiple green threads to operate within a single OS thread. However, goroutines represent an even smaller and more efficient form of such green threads. Goroutines consume less memory than threads and can be created and switched more rapidly than threads.

To utilize a goroutine, one simply needs to employ the go keyword. This enables the intuitive execution of synchronous code asynchronously during program development.

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan struct{})
    go func() {
        defer close(ch) // Close the channel when the goroutine finishes
        time.Sleep(1 * time.Second) // Wait for 1 second
        fmt.Println("Hello, World!") // Print "Hello, World!"
    }()

    fmt.Println("Waiting for goroutine...") // Print "Waiting for goroutine..."
    for range ch {} // Wait for the channel to close
}

This code transforms a synchronous operation, which pauses for one second then prints "Hello, World!", into an asynchronous flow. While this example is straightforward, converting more complex synchronous code to asynchronous using goroutines significantly enhances code readability, visibility, and comprehension compared to methods like async/await or promises.

However, poor goroutine code often results when the author has not fully grasped the flow of simply calling synchronous code asynchronously, or patterns such as fork & join (which resembles a divide-and-conquer approach). This article introduces several methods and techniques to avoid such pitfalls.

Concurrency Management

context

The appearance of context as the first management technique might seem unexpected. However, in the Go language, context extends beyond simple cancellation, playing a crucial role in managing the entire task hierarchy. For those unfamiliar, a brief explanation of this package follows.

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background()) // Create a cancellable context
    defer cancel() // Ensure cancellation when the main function exits

    go func() {
        <-ctx.Done() // Wait for the context to be cancelled
        fmt.Println("Context is done!") // Print "Context is done!"
    }()

    time.Sleep(1 * time.Second) // Wait for 1 second

    cancel() // Cancel the context

    time.Sleep(1 * time.Second) // Wait for 1 second to observe the effect
}

The code above utilizes context to print "Context is done!" after one second. context allows checking for cancellation via the Done() method, and cancellation itself can be arranged through functions such as WithCancel, WithTimeout, and WithDeadline, while WithValue attaches request-scoped values to the context.

Let us construct a simple example. Suppose you are writing code to retrieve user, post, and comment data using an aggregator pattern, and all requests must complete within two seconds. This can be implemented as follows:

package main

import (
    "context"
    "fmt"
    "time"
)

// Placeholder fetchers so the example compiles; real implementations
// would perform network or database calls that honor ctx.
func getUser(ctx context.Context) string    { return "user" }
func getPost(ctx context.Context) string    { return "post" }
func getComment(ctx context.Context) string { return "comment" }

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2 * time.Second) // Create a context with a 2-second timeout
    defer cancel() // Ensure cancellation when the main function exits

    ch := make(chan struct{}) // Create a channel to signal completion of data fetching
    go func() {
        defer close(ch) // Close the channel when the goroutine finishes
        user := getUser(ctx) // Fetch user data
        post := getPost(ctx) // Fetch post data
        comment := getComment(ctx) // Fetch comment data

        fmt.Println(user, post, comment) // Print the fetched data
    }()

    select {
    case <-ctx.Done(): // If the context is cancelled (e.g., due to timeout)
        fmt.Println("Timeout!") // Print "Timeout!"
    case <-ch: // If all data is fetched
        fmt.Println("All data is fetched!") // Print "All data is fetched!"
    }
}

The code above prints "Timeout!" if all data is not fetched within two seconds, and "All data is fetched!" otherwise. By using context in this manner, cancellation and timeouts can be readily managed even in code involving multiple goroutines.

The various context-related functions and methods are documented in the godoc for context. It is worth learning the basics until you can use them comfortably.

channel

unbuffered channel

A channel serves as a mechanism for communication between goroutines. A channel can be created using make(chan T), where T specifies the data type the channel will transmit. Data can be sent and received through a channel using <-, and a channel can be closed using close.

package main

import "fmt"

func main() {
    ch := make(chan int) // Create an unbuffered channel of integers
    go func() {
        ch <- 1 // Send 1 to the channel
        ch <- 2 // Send 2 to the channel
        close(ch) // Close the channel
    }()

    for i := range ch { // Iterate over values received from the channel
        fmt.Println(i) // Print each value
    }
}

The code above prints 1 and 2 using a channel, demonstrating nothing more than sending and receiving values. Channels, however, offer considerably more. Let us first distinguish buffered from unbuffered channels. Note that the example above uses an unbuffered channel, so a send and its matching receive must happen at the same time; if one side is missing, the other blocks and a deadlock can ensue.

buffered channel

What if the code above performed heavy computations on both sides rather than simple output? If the receiving side hangs for a long time while processing, the sender is stalled for the same duration. To prevent such a scenario, we can employ a buffered channel.

package main

import "fmt"

func main() {
    ch := make(chan int, 2) // Create a buffered channel of integers with capacity 2
    go func() {
        ch <- 1 // Send 1 to the channel
        ch <- 2 // Send 2 to the channel
        close(ch) // Close the channel
    }()

    for i := range ch { // Iterate over values received from the channel
        fmt.Println(i) // Print each value
    }
}

The code above prints 1 and 2 using a buffered channel. Thanks to the buffer, a send and its matching receive need not occur simultaneously. Giving the channel some slack in this way can keep the sender from being delayed by slowdowns in the downstream consumer.
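A buffered channel's fill level can be observed with the built-in len and cap functions. The following sketch shows that sends complete without a receiver until the buffer is full:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2) // capacity 2: two sends may complete without a receiver

	ch <- 1 // does not block
	ch <- 2 // does not block; the buffer is now full

	fmt.Println(len(ch), cap(ch)) // 2 2

	// A third send would block here until a receive frees a slot.
	fmt.Println(<-ch) // 1 (channels are FIFO)
	ch <- 3           // now there is room again
	fmt.Println(<-ch, <-ch) // 2 3
}
```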

select

When managing multiple channels, the select statement facilitates the straightforward implementation of a fan-in structure.

package main

import (
    "fmt"
    "time"
)

func main() {
    ch1 := make(chan int, 10) // Create buffered channel 1
    ch2 := make(chan int, 10) // Create buffered channel 2
    ch3 := make(chan int, 10) // Create buffered channel 3

    go func() {
        for {
            ch1 <- 1 // Send 1 to ch1
            time.Sleep(1 * time.Second) // Wait for 1 second
        }
    }()
    go func() {
        for {
            ch2 <- 2 // Send 2 to ch2
            time.Sleep(2 * time.Second) // Wait for 2 seconds
        }
    }()
    go func() {
        for {
            ch3 <- 3 // Send 3 to ch3
            time.Sleep(3 * time.Second) // Wait for 3 seconds
        }
    }()

    for i := 0; i < 3; i++ { // Loop 3 times
        select {
        case v := <-ch1: // Receive from ch1
            fmt.Println(v) // Print the received value
        case v := <-ch2: // Receive from ch2
            fmt.Println(v) // Print the received value
        case v := <-ch3: // Receive from ch3
            fmt.Println(v) // Print the received value
        }
    }
}

The code above creates three channels that periodically transmit 1, 2, and 3, respectively, and then uses select to receive and print values from these channels. By employing select in this manner, data can be simultaneously received from multiple channels and processed as soon as it arrives.
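select also supports a default case, which runs when no channel operation is ready, turning a blocking receive into a non-blocking one. A small sketch (tryRecv is an illustrative helper, not a standard function):

```go
package main

import "fmt"

// tryRecv performs a non-blocking receive: it returns immediately
// whether or not a value is available.
func tryRecv(ch chan int) (int, bool) {
	select {
	case v := <-ch:
		return v, true
	default: // chosen when no other case is ready
		return 0, false
	}
}

func main() {
	ch := make(chan int, 1)

	if _, ok := tryRecv(ch); !ok {
		fmt.Println("nothing ready yet")
	}

	ch <- 42
	if v, ok := tryRecv(ch); ok {
		fmt.Println("got", v) // got 42
	}
}
```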

for range

A channel can readily receive data using a for range loop. When for range is applied to a channel, it executes each time data is added to the channel and terminates the loop when the channel is closed.

package main

import "fmt"

func main() {
    ch := make(chan int) // Create an unbuffered channel of integers
    go func() {
        ch <- 1 // Send 1 to the channel
        ch <- 2 // Send 2 to the channel
        close(ch) // Close the channel
    }()

    for i := range ch { // Iterate over values received from the channel
        fmt.Println(i) // Print each value
    }
}

The code above prints 1 and 2 using a channel. In this code, for range is utilized to receive and print data whenever it is added to the channel. The loop then terminates when the channel is closed.

As previously mentioned several times, this syntax can also be employed for simple synchronization.

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan struct{}) // Create an unbuffered channel of empty structs
    go func() {
        defer close(ch) // Close the channel when the goroutine finishes
        time.Sleep(1 * time.Second) // Wait for 1 second
        fmt.Println("Hello, World!") // Print "Hello, World!"
    }()

    fmt.Println("Waiting for goroutine...") // Print "Waiting for goroutine..."
    for range ch {} // Wait for the channel to close
}

The code above prints "Hello, World!" after a one-second pause. This code transforms synchronous code into asynchronous code using a channel. By utilizing a channel in this manner, synchronous code can be easily converted to asynchronous, and join points can be established.

etc

  1. Sending to or receiving from a nil channel blocks forever, which typically manifests as a deadlock.
  2. Sending data to a channel after it has been closed causes a panic.
  3. Channels need not be explicitly closed for their memory to be reclaimed; the garbage collector handles that. Closing matters only as a signal to receivers, for example to terminate a for range loop.
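Point 1 has a useful flip side: because a nil channel blocks forever, setting a drained channel variable to nil inside a select disables that case. A sketch of this common fan-in idiom (mergeTwo is an illustrative helper):

```go
package main

import "fmt"

// mergeTwo reads every value from a and b, in whatever order they arrive.
// A closed, drained channel is set to nil so its case can never fire again.
func mergeTwo(a, b <-chan int) []int {
	var out []int
	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil // disable this case: receiving from nil blocks forever
				continue
			}
			out = append(out, v)
		case v, ok := <-b:
			if !ok {
				b = nil
				continue
			}
			out = append(out, v)
		}
	}
	return out
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 1)
	a <- 1
	a <- 2
	close(a)
	b <- 3
	close(b)

	fmt.Println(mergeTwo(a, b)) // all three values, order may vary
}
```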

mutex

spinlock

A spinlock is a synchronization method that repeatedly attempts to acquire a lock by spinning in a loop. In the Go language, a spinlock can be implemented easily with an atomic compare-and-swap on a single word.

package spinlock

import (
    "runtime"
    "sync/atomic"
)

type SpinLock struct {
    lock uintptr
}

func (s *SpinLock) Lock() {
    // Continuously try to acquire the lock using CompareAndSwap
    for !atomic.CompareAndSwapUintptr(&s.lock, 0, 1) {
        // Yield the processor to other goroutines
        runtime.Gosched()
    }
}

func (s *SpinLock) Unlock() {
    // Release the lock by setting its value to 0
    atomic.StoreUintptr(&s.lock, 0)
}

func NewSpinLock() *SpinLock {
    return &SpinLock{}
}

The code above implements the spinlock package using sync/atomic. The Lock method attempts to acquire the lock with atomic.CompareAndSwapUintptr, and the Unlock method releases it with atomic.StoreUintptr. Because Lock busy-waits, it consumes CPU until the lock is obtained and can spin indefinitely if the lock is never released. Spinlocks are therefore best suited to simple synchronization held for very short durations.

sync.Mutex

A mutex is a tool for synchronizing goroutines. A mutex created with sync.Mutex offers the Lock and Unlock methods, while the read/write variant sync.RWMutex additionally provides RLock and RUnlock.

package main

import (
    "sync"
)

func main() {
    var mu sync.Mutex // Declare a mutex
    var count int // Declare an integer counter

    done := make(chan struct{}) // Channel to wait for the goroutine
    go func() {
        defer close(done)
        mu.Lock() // Acquire the lock
        count++ // Increment the counter
        mu.Unlock() // Release the lock
    }()

    mu.Lock() // Acquire the lock
    count++ // Increment the counter
    mu.Unlock() // Release the lock

    <-done // Wait for the goroutine to finish

    println(count) // Print the final count
}

In the code above, two goroutines access the same count variable nearly simultaneously. By employing a mutex to make the code accessing count a critical section, concurrent access is prevented, and waiting on the done channel ensures the main goroutine only prints after the other has finished. Consequently, this code outputs 2 no matter how many times it is executed.

sync.RWMutex

sync.RWMutex is a mutex that allows for distinct read and write locks. The RLock and RUnlock methods can be used to acquire and release read locks, respectively.

package cmap

import (
    "sync"
)

type ConcurrentMap[K comparable, V any] struct {
    sync.RWMutex // Embed RWMutex for read/write locking
    data map[K]V // The underlying map
}

// New initializes the underlying map; writing to a nil map would panic.
func New[K comparable, V any]() *ConcurrentMap[K, V] {
    return &ConcurrentMap[K, V]{data: make(map[K]V)}
}

func (m *ConcurrentMap[K, V]) Get(key K) (V, bool) {
    m.RLock() // Acquire a read lock
    defer m.RUnlock() // Ensure the read lock is released

    value, ok := m.data[key] // Access the map data
    return value, ok
}

func (m *ConcurrentMap[K, V]) Set(key K, value V) {
    m.Lock() // Acquire a write lock
    defer m.Unlock() // Ensure the write lock is released

    m.data[key] = value // Modify the map data
}

The code above implements a ConcurrentMap using sync.RWMutex: Get takes a read lock and Set takes a write lock, enabling safe access and modification of the data map. A separate read lock is worthwhile because, in read-heavy workloads, many goroutines can hold the read lock concurrently without contending for the write lock. Where no state is mutated, using read locks avoids the overhead of exclusive locking and improves performance.

fakelock

fakelock is a simple technique that implements sync.Locker. This structure provides the same methods as sync.Mutex but performs no actual operation.

package fakelock

type FakeLock struct{} // Define an empty struct

func (f *FakeLock) Lock() {} // Implement the Lock method (does nothing)

func (f *FakeLock) Unlock() {} // Implement the Unlock method (does nothing)

The code above implements the fakelock package. This package implements sync.Locker by providing Lock and Unlock methods, which, however, perform no actual operation. The reason for the necessity of such code will be elaborated upon if an opportunity arises.

waitgroup

sync.WaitGroup

sync.WaitGroup is a mechanism for awaiting the completion of all goroutine operations. It provides Add, Done, and Wait methods. The Add method increments the counter for the number of goroutines, Done signals that a goroutine's operation has finished, and Wait blocks until all goroutine operations are complete.

package main

import (
    "sync"
    "sync/atomic"
)

func main() {
    wg := sync.WaitGroup{} // Declare a WaitGroup
    c := atomic.Int64{} // Declare an atomic integer counter

    for i := 0; i < 100; i++ { // Loop 100 times
        wg.Add(1) // Increment the WaitGroup counter
        go func() {
            defer wg.Done() // Decrement the WaitGroup counter when the goroutine finishes
            c.Add(1) // Atomically increment the counter
        }()
    }

    wg.Wait() // Wait for all goroutines to complete
    println(c.Load()) // Print the final atomic counter value
}

The code above utilizes sync.WaitGroup to concurrently increment the value of variable c by 100 goroutines. In this code, sync.WaitGroup is used to wait until all goroutines have finished, after which the incremented value of c is printed. While channels alone may suffice for fork & join operations involving a few tasks, sync.WaitGroup presents an excellent alternative for managing numerous fork & join tasks.

with slice

When used in conjunction with slices, a waitgroup can serve as an effective tool for managing concurrent execution without the need for locks.

package main

import (
    "fmt"
    "math/rand"
    "sync"
)

func main() {
    var wg sync.WaitGroup // Declare a WaitGroup
    arr := [10]int{} // Declare an array of 10 integers

    for i := 0; i < 10; i++ { // Loop 10 times
        wg.Add(1) // Increment the WaitGroup counter
        go func(id int) { // Launch a goroutine for each iteration
            defer wg.Done() // Decrement the WaitGroup counter when the goroutine finishes

            arr[id] = rand.Intn(100) // Assign a random integer to the array at the given index
        }(i) // Pass the loop variable 'i' as an argument to the goroutine
    }

    wg.Wait() // Wait for all goroutines to complete
    fmt.Println("Done") // Print "Done"

    for i, v := range arr { // Iterate over the array
        fmt.Printf("arr[%d] = %d\n", i, v) // Print each element and its index
    }
}

The code above employs only a waitgroup so that 10 goroutines each generate a random integer concurrently and store it at its assigned index. In this code, the waitgroup is used to wait until all goroutines have finished, after which "Done" is printed. Because each goroutine writes to a distinct index, the goroutines can store their data without locks, and batch post-processing can run once they all complete.

golang.org/x/sync/errgroup.Group

errgroup is an extension of sync.WaitGroup. Unlike sync.WaitGroup, when any goroutine in the group returns an error, errgroup cancels the context created by WithContext, and Wait returns the first non-nil error.

package main

import (
    "context"
    "fmt"
    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background()) // Create an errgroup with a context
    _ = ctx // Ignore the context for this example, but it's available for cancellation

    for i := 0; i < 10; i++ { // Loop 10 times
        i := i // Create a local copy of i for the goroutine
        g.Go(func() error { // Launch a goroutine
            if i == 5 { // If i is 5
                return fmt.Errorf("error") // Return an error
            }
            return nil // Otherwise, return nil
        })
    }

    if err := g.Wait(); err != nil { // Wait for all goroutines to complete and check for errors
        fmt.Println(err) // Print any error encountered
    }
}

The code above uses errgroup to create 10 goroutines, with the goroutine for i == 5 intentionally returning an error, demonstrating how an error surfaces. In practice, you would create goroutines with errgroup and implement appropriate error handling for the cases where individual goroutines fail.

once

A tool for executing code that should only run once. The following constructors can be used to execute the relevant code.

func OnceFunc(f func()) func()
func OnceValue[T any](f func() T) func() T
func OnceValues[T1, T2 any](f func() (T1, T2)) func() (T1, T2)

OnceFunc

OnceFunc wraps the given function so that it executes at most once, no matter how many times the returned function is called.

package main

import "sync"

func main() {
    once := sync.OnceFunc(func() { // Create a OnceFunc
        println("Hello, World!") // Function to be executed once
    })

    once() // Call the once function
    once() // Call again (will not execute)
    once() // Call again (will not execute)
    once() // Call again (will not execute)
    once() // Call again (will not execute)
}

The code above prints "Hello, World!" using sync.OnceFunc. In this code, sync.OnceFunc is used to create the once function, and even if the once function is invoked multiple times, "Hello, World!" will be printed only once.

OnceValue

OnceValue not only ensures that the specified function executes precisely once but also stores the return value of that function, returning the stored value on subsequent calls.

package main

import "sync"

func main() {
    c := 0 // Initialize a counter
    once := sync.OnceValue(func() int { // Create a OnceValue function
        c += 1 // Increment the counter
        return c // Return the counter value
    })

    println(once()) // Call and print the result
    println(once()) // Call again and print the stored result
    println(once()) // Call again and print the stored result
    println(once()) // Call again and print the stored result
    println(once()) // Call again and print the stored result
}

The code above increments the c variable by 1 using sync.OnceValue. In this code, sync.OnceValue is used to create the once function, and even if the once function is invoked multiple times, the c variable will have incremented only once, returning the value 1.

OnceValues

OnceValues operates identically to OnceValue but is capable of returning multiple values.

package main

import "sync"

func main() {
    c := 0 // Initialize a counter
    once := sync.OnceValues(func() (int, int) { // Create a OnceValues function
        c += 1 // Increment the counter
        return c, c // Return two values (both the counter value)
    })

    a, b := once() // Call and assign the returned values
    println(a, b) // Print the values
    a, b = once() // Call again and assign the stored values
    println(a, b) // Print the values
    a, b = once() // Call again and assign the stored values
    println(a, b) // Print the values
    a, b = once() // Call again and assign the stored values
    println(a, b) // Print the values
    a, b = once() // Call again and assign the stored values
    println(a, b) // Print the values
}

The code above increments the c variable by 1 using sync.OnceValues. In this code, sync.OnceValues is used to create the once function, and even if the once function is invoked multiple times, the c variable will have incremented only once, returning the value 1.

atomic

The atomic package provides atomic operations. While the atomic package offers methods such as Add, CompareAndSwap, Load, Store, and Swap, recent recommendations advise using types like Int64, Uint64, and Pointer.

package main

import (
    "sync"
    "sync/atomic"
)

func main() {
    wg := sync.WaitGroup{} // Declare a WaitGroup
    c := atomic.Int64{} // Declare an atomic integer counter

    for i := 0; i < 100; i++ { // Loop 100 times
        wg.Add(1) // Increment the WaitGroup counter
        go func() {
            defer wg.Done() // Decrement the WaitGroup counter when the goroutine finishes
            c.Add(1) // Atomically increment the counter
        }()
    }

    wg.Wait() // Wait for all goroutines to complete
    println(c.Load()) // Print the final atomic counter value
}

This is the example used earlier: it atomically increments the c variable using the atomic.Int64 type. The Add and Load methods atomically increment and read the variable, respectively. In addition, Store saves a value, Swap exchanges values, and CompareAndSwap replaces the value only if it matches an expected old value.

cond

sync.Cond

The sync package provides condition variables. A condition variable is created with sync.NewCond, and it offers the Wait, Signal, and Broadcast methods.

package main

import (
    "sync"
)

func main() {
    c := sync.NewCond(&sync.Mutex{}) // Create a new condition variable with a mutex
    ready := false // Flag to indicate readiness

    go func() {
        c.L.Lock() // Acquire the mutex associated with the condition variable
        ready = true // Set the ready flag to true
        c.Signal() // Signal one waiting goroutine
        c.L.Unlock() // Release the mutex
    }()

    c.L.Lock() // Acquire the mutex
    for !ready { // Loop while not ready
        c.Wait() // Wait for a signal, releasing the mutex temporarily
    }
    c.L.Unlock() // Release the mutex

    println("Ready!") // Print "Ready!"
}

The code above uses sync.Cond to wait until the ready variable becomes true. In this code, sync.Cond is employed to wait until the ready variable is true, after which "Ready!" is printed. By using sync.Cond in this manner, multiple goroutines can be made to wait concurrently until a specific condition is met.
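Broadcast, unlike Signal, wakes every waiting goroutine at once, which suits one-to-many notifications such as a start signal. A sketch (waitForStart is an illustrative helper):

```go
package main

import (
	"fmt"
	"sync"
)

// waitForStart blocks n goroutines on one condition variable and
// releases them all with a single Broadcast.
func waitForStart(n int) int {
	mu := sync.Mutex{}
	cond := sync.NewCond(&mu)
	started := false
	released := 0
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cond.L.Lock()
			for !started { // always re-check the condition after waking
				cond.Wait()
			}
			released++ // safe: cond.L is held here
			cond.L.Unlock()
		}()
	}

	cond.L.Lock()
	started = true
	cond.Broadcast() // wake every waiter; Signal would wake only one
	cond.L.Unlock()

	wg.Wait()
	return released
}

func main() {
	fmt.Println(waitForStart(5)) // 5
}
```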

This can be utilized to implement a simple queue.

package queue

import (
    "sync"
)

type Node[T any] struct {
    Value T // Value stored in the node
    Next  *Node[T] // Pointer to the next node
}

type Queue[T any] struct {
    sync.Mutex // Mutex to protect queue operations
    Cond *sync.Cond // Condition variable for signaling/waiting
    Head *Node[T] // Pointer to the head of the queue
    Tail *Node[T] // Pointer to the tail of the queue
    Len  int // Current length of the queue
}

func New[T any]() *Queue[T] {
    q := &Queue[T]{} // Create a new queue instance
    q.Cond = sync.NewCond(&q.Mutex) // Initialize the condition variable with the queue's mutex
    return q
}

func (q *Queue[T]) Push(value T) {
    q.Lock() // Acquire the mutex
    defer q.Unlock() // Ensure the mutex is released

    node := &Node[T]{Value: value} // Create a new node
    if q.Len == 0 { // If the queue is empty
        q.Head = node // Set head and tail to the new node
        q.Tail = node
    } else { // If the queue is not empty
        q.Tail.Next = node // Append the new node to the tail
        q.Tail = node // Update the tail
    }
    q.Len++ // Increment the queue length
    q.Cond.Signal() // Signal one waiting goroutine that an item is available
}

func (q *Queue[T]) Pop() T {
    q.Lock() // Acquire the mutex
    defer q.Unlock() // Ensure the mutex is released

    for q.Len == 0 { // Loop while the queue is empty
        q.Cond.Wait() // Wait for a signal, releasing the mutex temporarily
    }

    node := q.Head // Get the head node
    q.Head = q.Head.Next // Move the head to the next node
    q.Len-- // Decrement the queue length
    return node.Value // Return the value of the popped node
}

By leveraging sync.Cond in this manner, instead of consuming significant CPU resources with spin-locks, one can efficiently wait and resume operations when conditions are met.

semaphore

golang.org/x/sync/semaphore.Weighted

The semaphore package provides semaphores. A weighted semaphore is created with semaphore.NewWeighted, and it offers the Acquire, Release, and TryAcquire methods.

package main

import (
    "fmt"
    "golang.org/x/sync/semaphore"
)

func main() {
    s := semaphore.NewWeighted(1) // Create a new weighted semaphore with a weight of 1

    if s.TryAcquire(1) { // Attempt to acquire a weight of 1
        fmt.Println("Acquired!") // Print "Acquired!" if successful
    } else {
        fmt.Println("Not Acquired!") // Print "Not Acquired!" if unsuccessful
    }

    s.Release(1) // Release a weight of 1
}

The code above creates a semaphore with the semaphore package, attempts to acquire it with the TryAcquire method, and releases it with the Release method. Acquire behaves similarly, but blocks (subject to the provided context) until the requested weight is available.

Conclusion

This should cover the fundamental concepts. Based on the contents of this article, I hope you gain an understanding of how to manage concurrency using goroutines and are able to apply these methods in practice. I trust this article has been beneficial to you. Thank you.