Lazy way to convert floating point to integer

I ported some floating-point code over to FlexBasic and the execution speed was more than acceptable, but knowing that the integer equivalent executes roughly five times faster on the P2, it just niggled me.
So I just dumped the whole thing into Gemini, asked it to convert to integers, and the result was perfect. Simple stuff is a no-brainer and I just do it myself, but this was a bit complicated (for me) and I knew I wouldn't get it right the first time. But yep: roughly five times faster.
Comments
Could you post a small snippet of the original and the converted code, please? So it didn't simply replace the types of the variables, which I guess would result in terrible rounding errors, but actually applied proper scaling and took care to avoid all possible overflows?
I was surprised that it considered the possible overflow issue (this is part of my s-curve routine):
Edit: I didn't inform Gemini that we do indeed support 64-bit ints, so it didn't utilize them.
Just a plain translation. Doesn't need to know anything about the hardware. And barely any Basic really.
Get it to use the muldiv64() function to retain precision while not doing intermediate divides.
Impressive. I was a bit sceptical, because some people in a simulator forum I'm a member of used AI to help them program their Arduinos, and the AI made major mistakes, for example ignoring the rotational inertia of servo motors.
So it's getting better all the time, and AI is becoming a great way to save time. But you still need to know what you are doing, and you need to be able to verify that the results are plausible and sound. Good to know that humanity is not going to be totally expendable any time soon.
I also use floating-point maths whenever possible to avoid overflow and scaling issues. When a loop runs at 1 kHz this is perfectly manageable with the P2. But in the PFC and SMPS for my PEMF project the loops run at 50 and 100 kHz, so efficient ASM code is mandatory.
Not at all. And not impressive either. It's just a translation of the code/maths Mickster supplied. LLMs have been good at translations all along.
This just appeared on Slashdot - https://venturebeat.com/ai/stack-overflow-data-reveals-the-hidden-productivity-tax-of-almost-right-ai-code/
It goes to the trouble of offering recommendations at the end, although none of them address the elephant in the room: LLMs can't think for themselves. Which is a problem, because thinking for themselves is how they're being sold.
Agreed and that's all I want.
I have challenged "AI" with some really inefficient code and asked for the best possible speed optimisation... useless. But Gemini does come in handy.