Meh, it's annoying because none of this is really "AI" imho. The industry decided current neural networks were close enough to "gen AI", and every company started hyping (or outright lying) about general intelligence having arrived, along with its supposed capabilities.
That, and the whole deal with them scraping everyone's data, including material they have no license to use, which is all the more egregious given how companies like Meta, Microsoft, etc. aggressively defend their own copyrights and licensing agreements.
That, and the ML tooling is absolutely terrible: proprietary GPU binary blobs glued together with endless Python dependencies that wrap C/C++ libraries in some of the most inefficient ways possible.
It's pretty wild seeing the kind of projects/code being produced now too. You'll see someone wiring their embedded system up to a GPT API to handle OCR for digits, instead of rolling their own solution or just using the C standard library to do a tiny bit of compute work locally.
It's not even that hard anymore on 32-bit MCUs to do some SIMD-style math by packing uint8_t values into uint32_t words for your matrices. Heck, I've seen packed booleans used to run decent perceptrons on resource-constrained hardware.
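Just to illustrate what I mean (a hand-wavy sketch, not code from any particular project): the first function is the classic SWAR trick, adding four uint8_t lanes packed into one uint32_t without carries leaking between lanes, and the second is an XNOR-popcount dot product for a perceptron whose +/-1 inputs and weights are packed one bit per lane. __builtin_popcount is a GCC/Clang builtin, so swap in your compiler's equivalent if needed.

    #include <stdint.h>

    /* SWAR add: four uint8_t lanes packed in a uint32_t, added lane-wise
       (mod 256) with no carries crossing lane boundaries. */
    static uint32_t add_u8x4(uint32_t a, uint32_t b)
    {
        uint32_t low = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* add low 7 bits of each lane */
        return low ^ ((a ^ b) & 0x80808080u);                 /* fix up each lane's top bit */
    }

    /* Binarized perceptron: inputs and weights are +/-1, stored one bit per
       lane (1 => +1, 0 => -1). The dot product is agreements minus
       disagreements, computed with XNOR + popcount. */
    static int bin_dot32(uint32_t x, uint32_t w)
    {
        uint32_t agree = ~(x ^ w);                  /* XNOR: bit set where signs match */
        return 2 * (int)__builtin_popcount(agree) - 32;
    }

    static int perceptron_fire(uint32_t x, uint32_t w, int bias)
    {
        return bin_dot32(x, w) + bias >= 0;         /* 1 if the neuron fires */
    }

Nothing fancy, but it fits in a few registers and runs fine on hardware that would choke on a Python stack, let alone a round trip to a hosted API.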
So yeah, I agree with USerID that the current state of the industry is for sure a bubble.