using sidekiq to increment page views

From:
Fabrizio Regini
Date:
2014-09-04 @ 08:59
My application does, among other things, increment a counter each time a
page view happens.
I save daily aggregated data in Postgres, and I'm currently doing that in a
Sidekiq worker to offload the web process.

Right now I'm doing that by saving a record in the database during the web
transaction and passing the new record's id to the Sidekiq worker to perform
the aggregation. The record is then deleted when the aggregation is done.
This, in my mind, saves me from the same job being processed twice.

I think this is overkill, born of my lack of understanding of how the
whole thing works. I realize now this defensive approach does not save me
from potentially saving a page view twice, due to race conditions and how an
upsert in Postgres is performed. The counter may be incremented twice before
the record is deleted, after all.
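
For concreteness, what I'm doing looks roughly like this (model and worker
names simplified; this is a sketch, not my exact code):

# In the controller, during the web transaction:
view = PageView.create!(page_id: params[:id], viewed_at: Time.now)
AggregationWorker.perform_async(view.id)

class AggregationWorker
  include Sidekiq::Worker

  def perform(page_view_id)
    view = PageView.find_by(id: page_view_id)
    return unless view # assume another run already processed it

    # Upsert the daily aggregate, then drop the raw record.
    counter = DailyPageView.find_or_create_by!(page_id: view.page_id,
                                               day: view.viewed_at.to_date)
    counter.increment!(:views)
    view.destroy!
  end
end

The race is between increment! and destroy!: two concurrent runs can both
find the record before either deletes it, and count the view twice.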

Is this a suitable use case for Sidekiq? Does anyone have the same concerns
with this or similar use cases?

-f

Re: [sidekiq] using sidekiq to increment page views

From:
Matthijs Langenberg
Date:
2014-09-04 @ 09:19
Isn't using Redis directly far more suitable for your needs?

There is plenty to choose from, depending on your needs and the number of
users: a plain INCR, a bitmap, or even a HyperLogLog.

A Sidekiq job for the aggregation seems okay, but I doubt an aggregation
will take more than a fraction of a second.
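
Roughly, with the redis gem (key names made up; page_id and user_id stand
in for whatever identifies the page and the visitor):

require 'redis'
redis = Redis.new
day   = Time.now.strftime('%Y-%m-%d')

# Plain counter: one INCR per hit, total views per page per day.
redis.incr("hits:#{page_id}:#{day}")

# Bitmap: one bit per user id, exact unique visitors.
redis.setbit("visits:#{page_id}:#{day}", user_id, 1)
redis.bitcount("visits:#{page_id}:#{day}")

# HyperLogLog: approximate unique visitors in a small fixed space.
redis.pfadd("uniq:#{page_id}:#{day}", user_id)
redis.pfcount("uniq:#{page_id}:#{day}")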


Re: [sidekiq] using sidekiq to increment page views

From:
Fabrizio Regini
Date:
2014-09-04 @ 09:31
Thanks for the advice, I'll have a look at those other solutions. Using a
background job seems like a good idea to me because there may be other
stuff going on after the data is aggregated, like sending emails and so on,
and page traffic is very high, so I want to offload as many database
queries as possible.
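
Something along these lines is what I have in mind (worker and mailer names
made up; the mailer is just a placeholder for that follow-up work):

class PageViewWorker
  include Sidekiq::Worker

  def perform(page_id)
    day   = Date.today.to_s
    count = Sidekiq.redis { |conn| conn.incr("hits:#{page_id}:#{day}") }

    # The follow-up work is why I want a background job at all:
    NotificationMailer.milestone(page_id, count).deliver if count % 10_000 == 0
  end
end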


Re: [sidekiq] using sidekiq to increment page views

From:
Karl Baum
Date:
2014-09-05 @ 13:26
I would recommend doing this in the foreground with redis.incr(key_name).
This will be faster than actually enqueuing a job in Sidekiq, which uses
Redis anyway.
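
E.g. (key scheme and model names made up, assuming a shared $redis
connection):

# In the controller: one Redis round trip, no job enqueued.
$redis.incr("views:#{params[:id]}:#{Date.today}")

# A single scheduled worker can later fold the keys into Postgres:
class FlushViewsWorker
  include Sidekiq::Worker

  def perform
    day = Date.today.to_s
    $redis.scan_each(match: "views:*:#{day}") do |key|
      count = $redis.getset(key, 0).to_i # read and reset in one step
      next if count.zero?
      page_id = key.split(':')[1]
      row = DailyPageView.find_or_create_by!(page_id: page_id, day: day)
      row.increment!(:views, count)
    end
  end
end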
