
Conversation

@JanKrivanek

Enable consumers to configure whether they want to receive exceptions on exhausted Ids or be blocked until a new timeslot is available

@JanKrivanek
Author

@RobThree Did you have a chance to look at this one? Do you have any opinions? (I'm just wondering if there is anything I can change, or if you believe the idea itself is flawed.)
Thank you
Jan

@RobThree
Owner

RobThree commented Jun 29, 2020

Hmm, this issue has escaped my attention. Sorry about that.

Have you seen #21?

I'm not a big fan of spinlocking. I don't think the idea is flawed, and spinlocks do have their uses, but I'm not sure I agree with the implementation. If we'd implement spinlocking (and I have to think this through; I'm just thinking aloud), I guess I'd implement a spinlocking ITimeSource that does the spinlocking. << That won't work, obviously, since the timesource has no idea when the sequence overflows. Let me sit on it for a bit.

@JanKrivanek
Author

Thanks for getting back to this!
I've quickly run through #21 - a nice, long and deep discussion :-)
Without getting philosophical: such an opt-in feature would be very helpful for us, and I'd be happy to add it in whatever form you agree on (be it a flag as suggested now, an injectable IExhaustedTimeSlotHandler, a more involved wait that turns passive when it takes longer, etc.). Implementing the behavior in consuming code via TryCreateId or by catching the SequenceOverflowException kills the simplicity and convenience of IdGen (the enumerator, etc.) for which we chose to use it.
We're still in the pre-production phase, so there's no big harm in changing the library or moving to our own implementation, and I don't want to force my personal idea (anyway, I suspect you'd be a much stronger opponent in such a debate :-)). But again: if it wouldn't go against your intended philosophy and you have a specific opinion on how to offer this behavior, I'd be happy to create a PR.
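
(For context: a minimal sketch of the consumer-side wrapping being described, assuming IdGen's `TryCreateId(out long)` overload mentioned above; the `CreateIdBlocking` helper name and the retry policy are hypothetical.)

```csharp
using System.Threading;
using IdGen;

public static class IdGeneratorExtensions
{
    // Hypothetical helper: the kind of wrapper each consumer would have to
    // write if waiting is not supported by the library itself.
    public static long CreateIdBlocking(this IdGenerator generator)
    {
        var spinner = new SpinWait();
        long id;
        // TryCreateId is assumed to return false when the sequence for the
        // current timeslot is exhausted; spin until the next timeslot opens.
        while (!generator.TryCreateId(out id))
            spinner.SpinOnce();
        return id;
    }
}
```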

Thanks in any case!
Jan

@RobThree
Owner

RobThree commented Jun 29, 2020

Just out of curiosity: how often do you (expect to) run into a sequence overflow, and can't you mitigate it by adjusting the MaskConfig? If you're running into a sequence overflow, that's kind of a sign you need more bits for the sequence. Yes, spinwaiting is an option, and I'll consider how to implement it, but the better option, in my humble opinion, is to 'sacrifice' one or a few generator bits (whenever possible, of course). Each bit added to the sequence part doubles the number of ids you can generate within a tick, at the cost of halving the number of generators. Do you really need 1024 generators? Adding one or a few bits to the sequence part (again, whenever possible) means you don't even have to (spin)wait and waste precious CPU cycles; you just generate another id.
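
(To make the trade-off concrete: a minimal sketch, assuming the 2.x-era `MaskConfig(timestampBits, generatorIdBits, sequenceBits)` argument order and the `IdGenerator(generatorId, maskConfig)` overload.)

```csharp
using IdGen;

// IdGen's default layout: 41 timestamp + 10 generator-id + 12 sequence bits,
// i.e. 1024 generators and 4096 ids per tick (assumed MaskConfig argument
// order: timestamp, generator-id, sequence).
var defaultMask = new MaskConfig(41, 10, 12);

// Every bit moved from the generator part to the sequence part doubles the
// ids per tick and halves the generator count; the three parts must still
// sum to 63 bits.
var generator = new IdGenerator(0, defaultMask); // assumed 2.x overload
long id = generator.CreateId();
```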

Again, just asking, curiosity.

@JanKrivanek
Author

Correct - proper bit allocation is always key.
We definitely need room for hundreds of generators (counting the number of services we have now, plus how we plan to split them, plus the projected scale-out tactic); very occasionally we can get a request for several thousand ids at once, although processing such a request is not super time critical. And we want timeslots by the millisecond (as there will be periods with bursts of small requests).
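
(Plugging those requirements into the bit trade-off above: a back-of-the-envelope sketch; the 41/9/13 split is only an illustration, not something agreed in the thread.)

```csharp
using IdGen;

// "Hundreds of generators"            ->  9 generator bits = 2^9  =  512
// "Several thousand ids per timeslot" -> 13 sequence bits  = 2^13 = 8192 per tick
// 41 timestamp bits of 1 ms ticks     -> roughly 69 years of range
// 41 + 9 + 13 = 63 bits, the full budget of a positive signed 64-bit id.
var mask = new MaskConfig(41, 9, 13); // assumed argument order, as above
```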

Generally: spinning has the advantage of not surfacing errors later down the road in production, but of course there can be scenarios where it silently cripples the calling code. So I fully understand your hesitance. If it had to be one or the other, then exceptions are likely better ('fail fast').

I'm maybe thinking about injectable logic handling the sequence overflow (then by default the code can always throw, but one would have the option to wait and, let's say, throw if blocking takes too long or happens too often).

I'm really not sure :-) The more I think about it, the more I start to think we should just wrap the try method in our own code and move the enumerating and waiting there :-)

@RobThree
Owner

> I'm maybe thinking about injectable logic handling the sequence overflow (then by default the code can always throw, but one would have the option to wait and, let's say, throw if blocking takes too long or happens too often).

I was thinking something similar. I'm thinking along the lines of an ISequenceOverflowHandler, but I'm still pondering how that would work and how I could design it in a clean and compatible way.
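
(For the record, a rough sketch of what such a hook could look like; `ISequenceOverflowHandler` is only a name being floated in this thread, not an existing IdGen API, and everything below is hypothetical.)

```csharp
using System.Threading;

// Hypothetical extension point discussed above; not part of IdGen.
public interface ISequenceOverflowHandler
{
    // Called when the sequence for the current timeslot is exhausted.
    // Return true to have the generator retry in the next timeslot,
    // false to let it throw a SequenceOverflowException as it does today.
    bool OnSequenceOverflow(int attempt);
}

// Default behaviour: keep today's fail-fast semantics.
public sealed class ThrowingOverflowHandler : ISequenceOverflowHandler
{
    public bool OnSequenceOverflow(int attempt) => false;
}

// Opt-in behaviour: spin-wait for a bounded number of attempts, then give up.
public sealed class SpinWaitOverflowHandler : ISequenceOverflowHandler
{
    private readonly int _maxAttempts;
    public SpinWaitOverflowHandler(int maxAttempts = 100) => _maxAttempts = maxAttempts;

    public bool OnSequenceOverflow(int attempt)
    {
        if (attempt >= _maxAttempts)
            return false;
        Thread.SpinWait(50); // brief busy-wait before the generator retries
        return true;
    }
}
```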

> I'm really not sure :-) The more I think about it, the more I start to think we should just wrap the try method in our own code and move the enumerating and waiting there :-)

If you're in a hurry, then go ahead. But if you're not, let me think about this for a bit longer and I'm sure we can work something out.

RobThree added a commit that referenced this pull request Jul 2, 2020
…eSource and new MockAutoIncrementingIntervalTimeSource to new namespace. Added spinwait to configuration. Added a bunch of constructor overloads. Upped versions.
@RobThree
Owner

RobThree commented Jul 2, 2020

Hi @jakrivan,

I have implemented the spinwaiting (see dd03e48) but took a little bit of a different approach.

  1. I want the IdGenerator to be read-only (i.e. configured once during construction, with no changes allowed afterwards), so I made the UseSpinWait property read-only and added an (optional) useSpinWait argument to the constructor overloads (kind of exploding the constructor overloads, but this way we keep compatibility; I plan on moving to a `GeneratorConfig` class someday, getting rid of all but one constructor for the `IdGenerator`).
  2. Instead of duplicating code, I opted for a recursive call on a sequence overflow.
  3. I think I improved the unit test a bit for clarity.

Besides that, I also added the spinwait option to the configuration package (so now you can add a useSpinWait attribute in your XML to configure the generator), updated the documentation, and upped the version of both updated packages (and the documentation).
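
(A usage sketch under the assumptions above; the overload shape with an optional useSpinWait argument comes from the description in this comment, not from verified documentation, so check the updated docs for the exact signature and the XML attribute placement.)

```csharp
using IdGen;

// Opt in to spin-waiting instead of throwing SequenceOverflowException
// when a timeslot's sequence is exhausted (assumed overload shape).
var generator = new IdGenerator(0, useSpinWait: true);

// With spin-waiting enabled, CreateId() briefly blocks and retries in the
// next timeslot instead of throwing.
long id = generator.CreateId();
```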

I have published a new release and NuGet package.

I hope you don't mind I haven't used your PR.

@JanKrivanek
Author

Thanks @RobThree for the prompt implementation and release of this! I appreciate your cooperation and efforts! And definitely no hard feelings about not using this PR ;-)

