14 changes: 7 additions & 7 deletions docs/Profiling_v1.md
@@ -1,7 +1,7 @@
Profiling
===

StackExchange.Redis exposes a handful of methods and types to enable performance profiling. Due to its asynchronous and multiplexing
behavior, profiling is a somewhat complicated topic.

Interfaces
@@ -10,7 +10,7 @@ Interfaces
The profiling interface is composed of `IProfiler`, `ConnectionMultiplexer.RegisterProfiler(IProfiler)`, `ConnectionMultiplexer.BeginProfiling(object)`,
`ConnectionMultiplexer.FinishProfiling(object)`, and `IProfiledCommand`.

You register a single `IProfiler` with a `ConnectionMultiplexer` instance; it cannot be changed. You begin profiling for a given context (i.e. Thread,
Http Request, and so on) by calling `BeginProfiling(object)`, and finish by calling `FinishProfiling(object)`. `FinishProfiling(object)` returns
a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the configured `ConnectionMultiplexer` between
the `(Begin|Finish)Profiling` calls with the given context.
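
For orientation, a minimal sketch of that flow is shown below; the connection string, the fixed context object, and the `Command`/`ElapsedTime` members read from each `IProfiledCommand` are illustrative assumptions rather than a prescribed pattern:

```csharp
// Minimal sketch of the v1 flow (illustrative only)
class ToyProfiler : IProfiler
{
    public object Context = new object();

    // the library calls this to ask which context the current command belongs to
    public object GetContext() => Context;
}

var conn = ConnectionMultiplexer.Connect("localhost:6379");
var profiler = new ToyProfiler();
conn.RegisterProfiler(profiler);

conn.BeginProfiling(profiler.Context);
conn.GetDatabase().StringSet("foo", "bar");
var timings = conn.FinishProfiling(profiler.Context);

foreach (var timing in timings)
{
    Console.WriteLine($"{timing.Command}: {timing.ElapsedTime.TotalMilliseconds}ms");
}
```
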
@@ -20,7 +20,7 @@ What "context" object should be used is application specific.
Available Timings
---

StackExchange.Redis exposes information about:

- The redis server involved
- The redis DB being queried
@@ -144,12 +144,12 @@ for (var i = 0; i < 16; i++)

threads.Add(thread);
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```

`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` the issued them.
`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` that issued them.
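
As a hedged follow-up (assuming `perThreadTimings` is a `Dictionary<Thread, List<IProfiledCommand>>`, as the elided setup above suggests), the results could be summarized like this:

```csharp
// Summarize the per-thread results gathered above (assumes System.Linq is available)
foreach (var pair in perThreadTimings)
{
    var totalMs = pair.Value.Sum(cmd => cmd.ElapsedTime.TotalMilliseconds);
    Console.WriteLine($"Thread {pair.Key.ManagedThreadId}: {pair.Value.Count} commands, {totalMs:F1}ms total");
}
```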

Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.

@@ -199,10 +199,10 @@ protected void Application_EndRequest()
if (ctxObj != null)
{
var timings = RedisConnection.FinishProfiling(ctxObj);

// do what you will with `timings` here
}
}
```

This implementation will group all redis commands, including `async/await`-ed ones, with the http request that initiated them.
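
The begin side of this example is elided above; one hypothetical counterpart (assuming the registered `IProfiler` looks the context object up in `HttpContext.Current.Items` under the same key) might look like:

```csharp
// Hypothetical begin-side counterpart; the key name and context object are illustrative
protected void Application_BeginRequest()
{
    var ctxObj = new object();
    HttpContext.Current.Items["RedisProfilingContext"] = ctxObj;
    RedisConnection.BeginProfiling(ctxObj);
}
```
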
17 changes: 7 additions & 10 deletions docs/Profiling_v2.md
@@ -1,7 +1,7 @@
Profiling
===

StackExchange.Redis exposes a handful of methods and types to enable performance profiling. Due to its asynchronous and multiplexing
behavior, profiling is a somewhat complicated topic.

Interfaces
@@ -15,12 +15,10 @@ the library invokes this callback, and *if* a non-null session is returned: oper
a particular profiling session returns a collection of `IProfiledCommand`s which contain timing information for all commands sent to redis by the
configured `ConnectionMultiplexer`. It is the callback's responsibility to maintain any state required to track individual sessions.
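
A minimal sketch of that flow, assuming a single session for the lifetime of the snippet (the connection string and the `Command`/`ElapsedTime` members read from each `IProfiledCommand` are illustrative):

```csharp
var conn = ConnectionMultiplexer.Connect("localhost:6379");

var session = new ProfilingSession();
conn.RegisterProfiler(() => session);   // the callback decides which session (if any) receives each command

conn.GetDatabase().StringSet("foo", "bar");

foreach (var timing in session.FinishProfiling())
{
    Console.WriteLine($"{timing.Command}: {timing.ElapsedTime.TotalMilliseconds}ms");
}
```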

Available Timings
---

StackExchange.Redis exposes information about:

- The redis server involved
- The redis DB being queried
@@ -36,14 +34,13 @@ StackExchange.Redis exposes information about:

`TimeSpan`s are high resolution, if supported by the runtime. `DateTime`s are only as precise as `DateTime.UtcNow`.


Example profilers
---

Due to StackExchange.Redis's asynchronous interface, profiling requires outside assistance to group related commands together.
This is achieved by providing the desired `ProfilingSession` object via the callback, and (later) calling `FinishProfiling()` on that session.

Probably the most useful general-purpose session-provider is one that provides session automatically and works between `async` calls; this is simply:
Probably the most useful general-purpose session-provider is one that provides sessions automatically and works between `async` calls; this is simply:

```csharp
class AsyncLocalProfiler
@@ -168,12 +165,12 @@ );
);
threads.Add(thread);
}

threads.ForEach(thread => thread.Start());
threads.ForEach(thread => thread.Join());
```

`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` the issued them.
`perThreadTimings` would end up with 16 entries of 1,000 `IProfilingCommand`s, keyed by the `Thread` that issued them.

Moving away from toy examples, here's how you can profile StackExchange.Redis in an MVC5 application.

@@ -217,7 +214,7 @@ protected void Application_EndRequest()
if (session != null)
{
var timings = session.FinishProfiling();

// do what you will with `timings` here
}
}
@@ -229,4 +226,4 @@ and ensure that the connection has the profiler registered when the connection i
connection.RegisterProfiler(() => _redisProfiler.GetSession());
```

This implementation will group all redis commands, including `async/await`-ed ones, with the http request that initiated them.
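
The `_redisProfiler` referenced above is not shown in full here; a hypothetical implementation that keeps one session per request in `HttpContext.Items` could look like this (class and key names are illustrative):

```csharp
// Hypothetical per-request session provider; names are illustrative
class RequestProfiler
{
    private const string Key = "_redisProfilingSession";

    public ProfilingSession GetSession()
    {
        var ctx = HttpContext.Current;
        return (ProfilingSession)ctx?.Items[Key];   // null outside of a request => no profiling
    }

    public ProfilingSession CreateSessionForCurrentRequest()
    {
        var session = new ProfilingSession();
        HttpContext.Current.Items[Key] = session;
        return session;
    }
}
```
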
22 changes: 9 additions & 13 deletions docs/Streams.md
@@ -25,7 +25,7 @@ var values = new NameValueEntry[]
{
new NameValueEntry("sensor_id", "1234"),
new NameValueEntry("temp", "19.8")
};

var db = redis.GetDatabase();
var messageId = db.StreamAdd("sensor_stream", values);
@@ -63,7 +63,7 @@ Console.WriteLine($"Length = {streams.First().Entries.Length}");

You can limit the number of messages returned per stream by using the `countPerStream` optional parameter.
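
For example, a hedged sketch of capping a multi-stream read (the second stream name is illustrative):

```csharp
// Read at most 10 entries from each of the requested streams
var streams = db.StreamRead(
    new[]
    {
        new StreamPosition("event_stream", "0-0"),
        new StreamPosition("score_stream", "0-0")
    },
    countPerStream: 10);
```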

The `StreamRange` method allows you to return a range of entries within a stream.

```csharp
var messages = db.StreamRange("event_stream", minId: "-", maxId: "+");
@@ -72,9 +72,9 @@ var messages = db.StreamRange("event_stream", minId: "-", maxId: "+");
The `"-"` and `"+"` special characters indicate the smallest and greatest IDs possible. These values are the default values that will be used if no value is passed for the respective parameter. You also have the option to read the stream in reverse by using the `messageOrder` parameter. The `StreamRange` method also provides the ability to limit the number of entries returned by using the `count` parameter.

```csharp
var messages = db.StreamRange("event_stream",
    minId: "0-0",
    maxId: "+",
    count: 100,
    messageOrder: Order.Descending);
```
@@ -168,10 +168,10 @@ The `StreamClaim` method can be used to change ownership of messages consumed by

```csharp
// Change ownership to consumer_2 for the first 5 messages pending for consumer_1.
var pendingMessages = db.StreamPendingMessages("events_stream",
    "events_cg",
    count: 5,
    consumerName: "consumer_1",
    minId: "0-0");

db.StreamClaim("events_stream",
@@ -182,7 +182,3 @@ db.StreamClaim("events_stream",
```

There are several other methods used to process streams using consumer groups. Please reference the Streams unit tests for those methods and how they are used.
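
As a starting point, a hedged sketch of a common create/read/acknowledge cycle (reusing the illustrative stream, group, and consumer names from above; the count and start position are arbitrary):

```csharp
// Create the group once, starting from the beginning of the stream
// (this throws if the group already exists)
db.StreamCreateConsumerGroup("events_stream", "events_cg", "0-0");

// Read up to 10 messages that have never been delivered to any consumer in the group
var entries = db.StreamReadGroup("events_stream", "events_cg", "consumer_1", StreamPosition.NewMessages, count: 10);

foreach (var entry in entries)
{
    // process entry.Values here, then acknowledge so the message leaves the pending list
    db.StreamAcknowledge("events_stream", "events_cg", entry.Id);
}
```
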
4 changes: 2 additions & 2 deletions docs/ThreadTheft.md
@@ -35,7 +35,7 @@ on whether you have a `SynchronizationContext`. If you *don't* (common for conso
services, etc), then the TPL uses the standard thread-pool mechanisms to schedule the
continuation. If you *do* have a `SynchronizationContext` (common in UI applications
and web-servers), then its `Post` method is used instead; the `Post` method is *meant* to
be an asynchronous dispatch API. But... not all impementations are equal. Some
be an asynchronous dispatch API. But... not all implementations are equal. Some
`SynchronizationContext` implementations treat `Post` as a synchronous invoke. This is true
in particular of `LegacyAspNetSynchronizationContext`, which is what you get if you
configure ASP.NET with:
@@ -58,4 +58,4 @@ both appropriate and necessary, and does not represent additional overhead.

The library will attempt to detect `LegacyAspNetSynchronizationContext` in particular,
but this is not always reliable. The flag is also available for manual use with other
similar scenarios.
23 changes: 10 additions & 13 deletions docs/Timeouts.md
@@ -3,16 +3,15 @@
Verify the maximum bandwidth supported on your client and on the server where redis-server is hosted. If requests are getting bound by bandwidth, they will take longer to complete and can thereby cause timeouts.
Similarly, verify you are not getting CPU bound on the client or on the server box, which would cause requests to wait for CPU time and thereby time out.

Are there commands taking long time to process on the redis-server?
Are there commands taking a long time to process on the redis-server?
---------------
There can be commands that are taking long time to process on the redis-server causing the request to timeout. Few examples of long running commands are mget with large number of keys, keys * or poorly written lua script. You can run the SlowLog command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
There can be commands that are taking a long time to process on the redis-server causing the request to timeout. Few examples of long running commands are mget with large number of keys, keys * or poorly written lua script. You can run the SlowLog command to see if there are requests taking longer than expected. More details regarding the command can be found [here](https://redis.io/commands/slowlog).
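
If you prefer to check from the client, here is a hedged sketch (this assumes `redis` is a connected `ConnectionMultiplexer`, the `IServer.SlowlogGet` helper, and its `CommandTrace` result shape; host and port are illustrative):

```csharp
// Inspect the 10 most recent slow log entries recorded by the server
var server = redis.GetServer("localhost", 6379);
foreach (var slow in server.SlowlogGet(10))
{
    Console.WriteLine($"{slow.Time:u} took {slow.Duration.TotalMilliseconds}ms: {string.Join(" ", slow.Arguments)}");
}
```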

Was there a big request preceding several small requests to the Redis that timed out?
---------------
The parameter “`qs`” in the error message tells you how many requests were sent from the client to the server, but have not yet processed a response. For some types of load you might see that this value keeps growing, because StackExchange.Redis uses a single TCP connection and can only read one response at a time. Even though the first operation timed out, it does not stop the data being sent to/from the server, and other requests are blocked until this is finished, thereby causing timeouts. One solution is to minimize the chance of timeouts by ensuring that your redis-server cache is large enough for your workload and splitting large values into smaller chunks. Another possible solution is to use a pool of ConnectionMultiplexer objects in your client, and choose the "least loaded" ConnectionMultiplexer when sending a new request. This should prevent a single timeout from causing other requests to also time out.
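
A hedged sketch of that pooling idea follows; the pool size and the use of `GetCounters().TotalOutstanding` as the "least loaded" metric are illustrative choices, not a prescription:

```csharp
class RedisPool
{
    private readonly ConnectionMultiplexer[] pool;

    public RedisPool(string configuration, int size = 4)
    {
        pool = Enumerable.Range(0, size)
                         .Select(_ => ConnectionMultiplexer.Connect(configuration))
                         .ToArray();
    }

    public IDatabase GetDatabase()
    {
        // prefer the connection with the fewest operations still awaiting replies
        var leastLoaded = pool.OrderBy(c => c.GetCounters().TotalOutstanding).First();
        return leastLoaded.GetDatabase();
    }
}
```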


Are you seeing high number of busyio or busyworker threads in the timeout exception?
Are you seeing a high number of busyio or busyworker threads in the timeout exception?
---------------

Asynchronous operations in StackExchange.Redis can come back in 3 different ways:
@@ -21,13 +20,11 @@ Asynchronous operations in StackExchange.Redis can come back in 3 different ways
- From 2.0 onwards, StackExchange.Redis maintains a dedicated thread-pool that it uses for completing most `async` operations; the error message may include an indication of how many of these workers are currently available - if this is zero, it may suggest that your system is particularly busy with asynchronous operations.
- .NET also has a global thread-pool; if the dedicated thread-pool is failing to keep up, additional work will be offered to the global thread-pool, so the message may include details of the global thread-pool.



The StackExchange.Redis dedicated thread-pool has a fixed size suitable for many common scenarios, which is shared between multiple connection instances (this can be customized by explicitly providing a `SocketManager` when creating a `ConnectionMultiplexer`). In many scenarios when using 2.0 and above, the vast majority of asynchronous operations will be serviced by this dedicated pool. This pool exists to avoid contention, as we've frequently seen cases where the global thread-pool becomes jammed with threads that need redis results to unblock them.
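
For reference, a hedged sketch of supplying a dedicated `SocketManager` (the name is illustrative, and this is only worth doing if you have measured contention on the shared pool):

```csharp
var options = ConfigurationOptions.Parse("localhost:6379");
options.SocketManager = new SocketManager("dedicated-redis-io");   // otherwise the shared manager is used
var conn = ConnectionMultiplexer.Connect(options);
```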

.NET itself provides new global thread pool worker threads or I/O completion threads on demand (without any throttling) until it reaches the "Minimum" setting for each type of thread. By default, the minimum number of threads is set to the number of processors on a system.

For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which is injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:
For these .NET-provided global thread pools: once the number of existing (busy) threads hits the "minimum" number of threads, the ThreadPool will throttle the rate at which it injects new threads to one thread per 500 milliseconds. This means that if your system gets a burst of work needing an IOCP thread, it will process that work very quickly. However, if the burst of work is more than the configured "Minimum" setting, there will be some delay in processing some of the work as the ThreadPool waits for one of two things to happen:
1. An existing thread becomes free to process the work
2. No existing thread becomes free for 500ms, so a new thread is created.
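
If measurement shows the global pool is the bottleneck, one common mitigation is to raise the thread-pool minimums at application startup; a hedged sketch (the value 200 is an arbitrary example, not a recommendation):

```csharp
// Avoid the one-thread-per-500ms injection delay for bursts up to the chosen minimum
ThreadPool.GetMinThreads(out var workerMin, out var iocpMin);
ThreadPool.SetMinThreads(Math.Max(workerMin, 200), Math.Max(iocpMin, 200));
```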

@@ -36,16 +33,16 @@ Basically, *if* you're hitting the global thread pool (rather than the dedicated
If we look at an example error message from StackExchange.Redis 2.0, you will see that it now prints ThreadPool statistics (see IOCP and WORKER details below).

Timeout performing GET MyKey (1000ms), inst: 2, qs: 6, in: 0, mgr: 9 of 10 available,
IOCP: (Busy=6,Free=994,Min=4,Max=1000),
WORKER: (Busy=3,Free=997,Min=4,Max=1000)

In the above example, there are 6 operations currently awaiting replies from redis ("`qs`"), there are 0 bytes waiting to be read from the input stream from redis ("`in`"), and the dedicated thread-pool is almost fully available to service asynchronous completions ("`mgr`"). You can also see that for IOCP threads there are 6 busy threads and the system is configured to allow a minimum of 4 threads.

In 1.*, the information is similar but slightly different:

System.TimeoutException: Timeout performing GET MyKey, inst: 2, mgr: Inactive,
queue: 6, qu: 0, qs: 6, qc: 0, wr: 0, wq: 0, in: 0, ar: 0,
IOCP: (Busy=6,Free=994,Min=4,Max=1000),
WORKER: (Busy=3,Free=997,Min=4,Max=1000)

It may seem contradictory that there are *fewer* numbers in 2.0 - this is because the 2.0 code has been redesigned not to require some additional steps.