
Commit 90e8900

fixup! feat(devmanual): DB clusters and read/write split
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
1 parent 7815675 commit 90e8900


1 file changed (+6, -4 lines)


developer_manual/digging_deeper/performance.rst

Lines changed: 6 additions & 4 deletions
@@ -84,22 +84,24 @@ A common pattern that works fine with small databases but falls apart on overloa

There are two patterns to avoid the "dirty" read:

-1. **Wrap the write+read operation in a transaction**. Nextcloud's read/write split, but also other database cluster load balancers will ensure that the queries of a transactions go to one single database node of a cluster. That ensures that data written is instantly available to be read back. This approach guarantees consistency, but puts additional load on the primary node because it has to execute the read operation.
+1. **Wrap the write+read operation in a transaction**. Nextcloud's read/write split, like other database cluster load balancers, will ensure that the queries of a transaction go to a single database node of the cluster. That ensures that written data is instantly available to be read back. This approach guarantees consistency, but puts additional load on the primary node because it has to execute the read operation too. It is best used in contained code blocks. Do not span transactions across event listeners, because their execution might lead to :ref:`long transactions<performance-long-transactions>` and locking issues.
2. **Avoid the read operation**. If the code allows it, avoid the read operation altogether. You should know what was just written. If you need the auto-increment ID, use the database's *last insert ID* feature. Proceed with this data, pass it to event listeners, etc. This approach guarantees consistency, too, but also improves overall performance (both patterns are sketched below).
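
For illustration, here is a minimal PHP sketch of both patterns. It assumes an injected ``OCP\IDBConnection`` (``$db``); the ``mytasks`` table, its ``name`` column and the ``$name`` variable are made up for the example, and the exact query builder calls may differ in your app:

.. code-block:: php

    // Pattern 1: wrap write + read in a single, short transaction so that
    // both queries are routed to the same (primary) database node.
    $db->beginTransaction();
    try {
        $insert = $db->getQueryBuilder();
        $insert->insert('mytasks')
            ->values(['name' => $insert->createNamedParameter($name)]);
        $insert->executeStatement();

        $select = $db->getQueryBuilder();
        $select->select('*')
            ->from('mytasks')
            ->where($select->expr()->eq('name', $select->createNamedParameter($name)));
        $task = $select->executeQuery()->fetch();

        $db->commit();
    } catch (\Throwable $e) {
        $db->rollBack();
        throw $e;
    }

    // Pattern 2: skip the read entirely and work with the last insert ID.
    $insert = $db->getQueryBuilder();
    $insert->insert('mytasks')
        ->values(['name' => $insert->createNamedParameter($name)]);
    $insert->executeStatement();
    $id = $insert->getLastInsertId();
    // $id and $name are all the data the following code (and event listeners) need.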

-.. note::
+.. tip::
Nextcloud can help you identify read-after-write issues without the need to set up a cluster for your development environment. If you change the loglevel to 0 (debug), dirty reads will trigger a log entry. Monitor the log when testing your code.

Look out for messages like ``dirty table reads: SELECT `id` FROM `*PREFIX*jobs` WHERE (`class` = :dcValue1) AND (`argument_hash` = :dcValue2) LIMIT 1``. Use the log entry's *trace* to locate the code that executed the query.

-
+Be aware that the dirty read detection is not perfect and might wrongly log a dirty read when you write and read unrelated data. As an example, you may read user *alice*, update her data, and then read *bob*'s data and do the same. Even if the database replicates slowly, you will not read data that doesn't exist yet. Since Nextcloud tracks reads and writes at the table level, it still warns.
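
The debug loglevel mentioned here can be set in ``config/config.php`` of a development setup; ``loglevel`` is a regular system config value, so ``occ config:system:set loglevel --value=0 --type=integer`` sets it from the command line as well:

.. code-block:: php

    $CONFIG = [
        // ... the rest of your development config ...

        // 0 = debug: dirty reads and slow transactions are logged
        'loglevel' => 0,
    ];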
+
+.. _performance-long-transactions:

Long transactions
~~~~~~~~~~~~~~~~~

Transactions are crucial for changes that belong together, but they can cause problems under load: the longer a transaction is open, the more other queries may have to wait for a lock to be released. This can lead to contention, timed-out requests and deadlocks. So use transactions wisely and try to keep them as short as possible. Don't mix database operations with file system operations, for example.
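
As an illustration of that advice, here is a hypothetical sketch; the ``reports`` table, its columns and the ``$userFolder``/``$content`` variables are made up for the example:

.. code-block:: php

    // Do the slow file system work outside of the transaction ...
    $file = $userFolder->newFile('report.txt', $content);

    // ... and keep the transaction itself as short as possible: only the
    // database statements that belong together go inside.
    $db->beginTransaction();
    try {
        $qb = $db->getQueryBuilder();
        $qb->insert('reports')
            ->values([
                'file_id' => $qb->createNamedParameter($file->getId(), \OCP\DB\QueryBuilder\IQueryBuilder::PARAM_INT),
                'created_at' => $qb->createNamedParameter(time(), \OCP\DB\QueryBuilder\IQueryBuilder::PARAM_INT),
            ]);
        $qb->executeStatement();
        $db->commit();
    } catch (\Throwable $e) {
        $db->rollBack();
        throw $e;
    }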

-.. note::
+.. tip::
Nextcloud can help you identify slow transactions. If you change the loglevel to 0 (debug), slow transactions will cause a log message at commit/rollback.

Look out for messages like ``Transaction took longer than 1s: 7.1270351409912`` and ``Transaction rollback took longer than 1s: 1.2153599501``.
