But dismissal letters written by AI definitely sound better! 😁
- NET: FTTH, OPNsense, 10G backbone, GWN APs, ipPBX
- SRV: 12 RU HA server cluster, 0.1 PB storage on premise
- IoT: thread, zigbee, tasmota, BidCoS, LoRa, WX suite, IR
- 3D: two 3D printers, 3D scanner, CNC router, laser cutter
In addition to it being a bad time for someone looking for their first job in the tech industry (gaming specifically), I'm certain that AI is also contributing to my son's difficulty landing his first full-time job. From what relatives in various tech fields have told us, much of the donkey work that would have been done by an intern cutting their teeth in a junior role is now somewhat automated by AI. Not 100%, I'm sure, but still a factor nonetheless.
I personally think it's a bubble that will burst. No one has yet worked out how to make money from it, and they are spending billions. There are also reports that they are running out of data to train on, and therefore the growth in capability has stalled or will stall. Will we still have AI? Yes, but in a much different form from today. I personally think the predictions of AI taking over and all of us being made redundant are completely overblown by companies that want you to believe that you need their products to survive the AI apocalypse.
Data for A.I. Training Is Disappearing Fast, Study Shows - The New York Times
Until AI has actual comprehension and isn't just cause and effect, it will have limited value in technical spaces.
ML and AI are essentially just picking what has been determined to be the most statistically likely outcome, based on patterns. There's no intuition there.
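To make that concrete, here is a toy sketch (Python, with made-up probabilities, not taken from any real model) of what "picking the most statistically likely outcome" amounts to:

```python
# Toy bigram "model": probability of the next word given the previous word.
# The numbers are invented for illustration; a real model derives them from
# learned weights over a huge vocabulary, but the selection step is the same idea.
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "end": 0.25},
    "cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
}

def most_likely_next(word: str) -> str:
    """Return the highest-probability continuation -- pure argmax, no intuition."""
    candidates = next_word_probs.get(word, {})
    return max(candidates, key=candidates.get) if candidates else "(unknown)"

print(most_likely_next("the"))  # -> cat
print(most_likely_next("cat"))  # -> sat
```

Real systems do this over enormous vocabularies (and usually sample rather than always taking the top pick), but the mechanism is still statistical pattern-matching rather than comprehension.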
Anything I say is the ramblings of an ill informed, opinionated so-and-so, and not representative of any of my past, present or future employers, and is also probably best disregarded.
I try to take a balanced approach to AI. I too think it's in a major bubble phase, though it is starting to be a useful tool in some limited scenarios (note I say tool, not a replacement for real people; I think it should be treated as such).
I think the main problem is more that PHBs think it's this amazing magic thing, a huge quantum leap, because they've bought into all the hype. I've yet to see a demo that would convince me it's anything other than a refinement of various things that have gone before... remember when artificial neural networks were all the rage, and Skynet was right on the horizon?
PS: And I also tend to buy a bit into the narrative which I've heard a few people repeat: Current AI models are little more than massive plagiarism engines.
"I was born not knowing and have had only a little time to change that here and there." | Octopus Energy | Sharesies
- Richard Feynman
Does anyone remember when Blockchain was going to revolutionise how we were going to live our lives? Apart from a few bored monkey NFTs, I haven’t heard anyone mention Blockchain in a very long time.
ML and AI definitely hold more value in the everyday workplace than the blockchain.
Equipping people with best practices on LLM usage, and process optimization using AI and ML, is the best path forward. A lot of expectations are that configuration, operation and monitoring tasks will be entirely taken up by AI in the next x years. I don't see this happening in any but the most basic spaces (push-button, receive-bacon sort of scenarios). Anywhere that requires intuition, interpretation and decision making is still the domain of the human, and will remain so until we have actual General AI rather than Generative AI. Unfortunately, this means that a lot of the entry-level jobs and basic "wage slave" jobs are at risk, leaving physical labour and mid-level-and-above roles as the safest places to be, in theory.
In practice, "Due to ever changing business needs"...
Anything I say is the ramblings of an ill informed, opinionated so-and-so, and not representative of any of my past, present or future employers, and is also probably best disregarded.
toejam316:
ML and AI definitely hold more value in the everyday workplace than the blockchain.
Equipping people with best practices on LLM usage, and process optimization using AI and ML is the best path forward.
This. So much this. I would +10 if I could. I completed a BE and some postgrad studies around 2002, which had a pretty large ML, ANN, image processing and machine vision component. Not so much LLMs, as that area I think was pretty limited unless you had IBM/Google-type resources. Still, the impression I got was that a lot of research was going into this stuff even then.
They've definitely made some pretty big advances in the area in the last few years, especially in the corporate/startup world, rather than pure research communities. But it also feels a lot like there's been a whole lot of hype generated around it more recently, which has resulted in crazy amounts of money, computing power, etc being thrown at it, along with a whole bunch of new buzzwords and cool controlled demos, which have made the media pick up on it in a big way.
At the end of the day, IMO it's a tool, best to learn how to use it like any other tool. Just don't get over excited by it. And if you're setting policy for an organisation or team don't just parrot on about how everyone should be "automating and using AI" - actually make sure your people have the time and money to do some proper training on these tools.
"I was born not knowing and have had only a little time to change that here and there." | Octopus Energy | Sharesies
- Richard Feynman
LLMs didn't exist in 2002? Language processing and statistical language models existed in different states of maturity, but certainly not large. The LLM architecture we know today came from the work leading up to and after the transformer paper Google published in 2017.
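For anyone curious what that 2017 paper ("Attention Is All You Need") actually introduced, here is a rough sketch of its core operation, scaled dot-product attention. The shapes and random inputs below are purely illustrative; real models wrap this in learned projections, multiple heads and many stacked layers:

```python
# Rough sketch of scaled dot-product attention from the 2017 transformer paper.
# Inputs are random and purely illustrative.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)  # how strongly each query matches each key
    weights = softmax(scores, axis=-1)                 # one probability distribution per query
    return weights @ V                                 # weighted mix of the values

# Toy example: batch of 1, sequence of 4 tokens, model dimension 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(1, 4, 8))
K = rng.normal(size=(1, 4, 8))
V = rng.normal(size=(1, 4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (1, 4, 8)
```

It is still "just" weighted statistics, but letting every token attend to every other token is a large part of why the architecture scaled the way it has.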
They weren't labelled LLMs at the time, but I seem to recall very early what I would call precursors being around.
"I was born not knowing and have had only a little time to change that here and there." | Octopus Energy | Sharesies
- Richard Feynman
Not really. Neural nets, and by extension ImageNet, were the precursors to LLMs, so that's still the 2010s. Statistical models aren't comparable to transformers.
Hmm, have I crossed into a parallel dimension? :D Neural nets were only 2010s?? ANN research has been around since the '70s AFAIK. Again, this was research communities/papers, not commercial applications, if that's what you mean.
EDIT: I stand corrected - I assume you mean you don't consider statistical NNs to be precursors to deep neural nets? Fair enough. I await our AI overlords ;-)
"I was born not knowing and have had only a little time to change that here and there." | Octopus Energy | Sharesies
- Richard Feynman
I don't really consider what is being offered these days to be actual A.I.; rather, it seems to be mostly machine learning, automation and math models.
Software Engineer
(the practice of real science, engineering and management)
A.I. (Automation rebranded)
Gender Neutral
(a person who believes in equality and who does not believe in/use stereotypes. Examples such as gender, binary, nonbinary, male/female etc.)
...they/their/them...