Here comes a tale on why you should never silence errors inside database transactions. Learn how to use transactions properly, and what to do when using them is not an option.

I was working on a project where users could import a bunch of heavy entities (let’s call them products) from an external service into our application. Each product brought in even more associated data from external APIs. It is not uncommon for a user to import hundreds of products, and each product has dependencies that need to be fetched too, so you can imagine that the whole process takes a while (30 to 60 seconds per product). As a user may get tired of waiting and hit the “Cancel” button at any moment, the application should remain usable with only those records that made it through.

Here’s how our “interruptible import” was implemented: at the start of an import, a temporary record was created in a separate database table for each enqueued product. Then, for each of those records, a background job was spawned that pulled in all the external information, persisted it in the right places (creating associations where necessary), and finally deleted the temporary record. If the record was not found by the time the background job started (when a user cancels the import, all temporary records are deleted), the job just did nothing and exited silently. Interrupted import or not, the absence of temporary records meant we were done.

Our design seemed simple and reliable, but it did not always work exactly as planned. A common bug description stated: “After canceling an import, a user is presented with a list of imported records. However, the next time the page is refreshed, the list has more records than shown initially.” The reason for that was clear: as background jobs took up to a minute to finish, even a canceled import had an “afterglow.” There was nothing wrong with the design itself, but it led to a confusing user experience, so we needed to address it in one of two ways: either somehow identify and cancel jobs already in progress, or wait for the last imports to finish before confirming that the whole process was in fact “canceled.” I chose the latter.

## Transaction locks to the rescue!

For anyone dealing with relational databases often, the answer is clear: use transactions! According to the description in the Rails docs, they are “protective blocks where SQL statements are only permanent if they can all succeed as one atomic action.” As the documentation states, “you should use transaction blocks whenever you have a number of statements that must be executed together or not at all.” It is good to keep in mind that in most RDBMSs, records being updated inside a transaction are locked and cannot be modified by other processes until the transaction is finished. The same is true when you fetch the records with SELECT FOR UPDATE.

Exactly our case! I used a single database transaction to wrap the complex import task for each individual product and to lock task records from being changed or deleted, so that the cancel action’s `delete_all` would wait while all running imports finished (both pieces are sketched at the end of this section).

Simple and elegant! I ran everything locally and in the staging environment, found no problems, and deployed to production. Satisfied with my work, I was surprised to wake up to a myriad of log errors and complaints from colleagues.
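Here is a minimal sketch of the original “interruptible import” design. The code itself does not survive in this section, so the class and method names (`ImportTask`, `ProductImportJob`, `import_product`) are assumptions, written against plain ActiveRecord and ActiveJob:

```ruby
# Hypothetical temporary-record model: one row per enqueued product.
# Canceling an import deletes all of a user's ImportTask rows.
class ImportTask < ApplicationRecord
  belongs_to :user
end

class ProductImportJob < ApplicationJob
  def perform(task_id)
    task = ImportTask.find_by(id: task_id)
    return if task.nil? # import was canceled, records deleted: exit silently

    import_product(task) # stand-in: fetch external data, create associations
    task.destroy         # no temporary records left means the import is done
  end
end
```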
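And a sketch of the transaction-based fix, again with assumed names: the job wraps its work in a transaction and takes a row lock (ActiveRecord’s `lock` issues `SELECT … FOR UPDATE`), so the `delete_all` in the cancel action has to wait until every in-flight import commits. Only the `delete_all` line with its comment survives from the original code; the rest is a reconstruction.

```ruby
class ProductImportJob < ApplicationJob
  def perform(task_id)
    ImportTask.transaction do
      # SELECT ... FOR UPDATE: the row stays locked (and undeletable
      # by other sessions) until the transaction commits.
      task = ImportTask.lock.find_by(id: task_id)
      if task
        import_product(task)
        task.destroy
      end
      # if task is nil, the import was already canceled: do nothing
    end
  end
end

# The "Cancel" action: blocked by the row locks above until running imports commit.
ImportTask.where(user: current_user).delete_all # waits while all imports will be finished
```

Note that `delete_all` issues a single SQL `DELETE` without instantiating models or running callbacks; it simply waits on any row locks held by open transactions before removing whatever temporary records remain.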