- cross-posted to:
- [email protected]
There are a handful of use cases where I’ve seen generative AI be useful:
- Searching
- Partner programming (how do I…)
- Chatbot (a fun novelty, though it gets boring quickly, and the only reliable way of controlling it safely for business is by adding more AI)
And a few more probably.
I spent about six months deep-diving into how it all worked. I was dreading that it would take my job and was determined to learn about it. What I learned is that there are many serious pitfalls that seem to be more or less ignored, or unknown, by businesses and the people covering it.
I won’t say it’s as bad as blockchain (there are real uses for it), but the hype is pretty damn close. Businesses think it will save them billions and that they can start getting rid of developers. Tech bros are lining up to say it’s going to bring on the singularity.
Eh. It’s cool. I wouldn’t say it’s going to bring the second coming of Jesus.
Searching
Literally the worst possible usage. They’re syntax generators, not search engines, and certainly not founts of knowledge.
I don’t know how to say this in a less direct way: if this is your take, then you should probably look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:
- Generate useful search queries.
- Dig through provided text to determine what it contains.
- Summarize text.
These are all the building blocks for searching on the internet. And if you are talking about local documents and the like, retrieval-augmented generation (RAG) can be pretty damn useful.
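To make those building blocks concrete, here is a rough sketch. The `llm()` stub and the function names are mine, standing in for a real model API call; the point is only how the pieces fit together:

```python
def llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return "stub answer for: " + prompt

def generate_query(question: str) -> str:
    # Building block 1: have the model turn a question into a search query.
    return llm(f"Write a concise web search query for: {question}")

def summarize(question: str, pages: list[str]) -> str:
    # Building blocks 2 and 3: dig through the fetched text and
    # summarize an answer grounded in that text only.
    joined = "\n---\n".join(pages)
    return llm(f"Using only the text below, answer: {question}\n{joined}")
```

Glue a web search and a page fetcher between those two calls and you have the skeleton of an LLM-backed search tool.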
That’s not entirely fair either though… They can incorrectly summarize, omit important information, or just make stuff up.
True, though that isn’t all that different from people giving knee-jerk responses on the internet…
I am not claiming they are perfect, but for the steps I described a human aware of the limitations is perfectly able to validate the outcome. While still having saved a bunch of time and effort on doing an initial search pass.
All I am saying is that it is fine to be critical of LLM and AI claims in general as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum. Which is no longer being critical but just being a reverse fanboy/girl/person.
Fair, nuance is an endangered species.
Ask it to search for or list all of Bach’s children and you can watch the wheels completely fall off. It invented people, complete with fake birth and death dates, and even gave me some of Handel’s children!
What do you mean by “it”? The chatGPT interface? Could be, but then you are also missing the point I am making.
After all, ChatGPT is just one possible implementation of LLMs, and indeed not perfect in how it implements some things, like search. In fact, I think they shot themselves in the foot by implementing search through Bing, and implementing it poorly. It is basically nothing more than a proof-of-concept tech demo.
That doesn’t mean that LLMs are useless for tasks like searching; it just means that you need to properly implement the functionality to make it possible. It certainly is possible to build search functionality around LLMs that is both capable and can be reviewed by a human user to make sure it is not fucking up.
Let me demonstrate. I am doing some steps that you would normally automate with conventional code:
I started by asking ChatGPT a simple question:
*“I want to provide a list of all the children Bach had. What would be a suitable Google query?”*
It then responded with:
“To compile a comprehensive list of Johann Sebastian Bach’s children, a suitable Google query would be: “List of Johann Sebastian Bach’s children names”. This query is specific enough to indicate that you’re looking for a detailed list, including the names of all the children, which should yield targeted results.”
The following step I did manually, but it is something you would normally automate. I put the suggested query into Google, quickly grabbed the first five links, and then put the following into ChatGPT:
The top 5 results in google are the pages I provided below. Browse to each page, and answer the query “List of Johann Sebastian Bach’s children names”. Just answer the query, but let me know if there is conflicting information on the page.
https://en.wikipedia.org/wiki/Bach_family
https://www.classicfm.com/composers/bach/guides/children/
https://www.bachonbach.com/100-bach-faq-and-soon-there-are-500-bach-faq/faq-26-what-were-the-names-of-all-of-johann-sebastian-bach-s-kids/
https://www.spiritsound.com/music/bachchildren.html
https://www.grahamsmusic.net/post/bach-s-wives-and-children
It then proceeded to give me the following answer:
Johann Sebastian Bach had 20 children in total. With his first wife, Maria Barbara Bach, he had seven children, four of whom survived into adulthood: Catharina Dorothea, Wilhelm Friedemann, Carl Philipp Emanuel, and Johann Gottfried Bernhard. With his second wife, Anna Magdalena Wilcke, he had 13 children, of whom six survived into adulthood: Gottfried Heinrich, Elisabeth Juliane Friederica, Johanna Carolina, Regina Susanna, Johann Christoph Friedrich, and Johann Christian.
Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course a lot of it can be done better; I just didn’t want to spend too much time on it.
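Even the “going over the results myself” step can get a cheap automated first pass: check that every name in the model’s answer actually appears somewhere in the fetched pages. This is an illustrative sketch of mine, not part of the demo above; the names and page texts are stand-ins:

```python
def find_unsupported_names(answer_names, page_texts):
    # Flag any name from the model's answer that appears in none of the
    # fetched source pages -- a crude hallucination check, not a proof.
    missing = []
    for name in answer_names:
        if not any(name.lower() in page.lower() for page in page_texts):
            missing.append(name)
    return missing
```

Anything this flags still needs a human look, but it catches the “invented people” failure mode mentioned earlier with almost no effort.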
Gpt is fantastic at search. Like, check its work but it’ll check hundreds of pages of results way faster than you can.
There may be exceptions, but everything I’ve seen from AI programming is next-level trash. It’s like copy-pasting from Stack Overflow without the thousand comments all around it saying DO NOT DO THIS!
When ChatGPT was first released to the general public, I wanted to try it out. I had it write a script to handle some simple parsing of network log files. I was having an intermittent issue with my home network that I couldn’t figure out, so I had logged a lot of data and was hoping to pin down the problem. But I needed to filter out all the routine stuff that would just be background noise. I could have written the script myself in about an hour, but figured hey, maybe ChatGPT can help me bang it out in a couple of minutes.
The code it wrote looked very good at a glance, and I was impressed. However, as I read it, it turned out to be total nonsense. It used variables before declaring them. Halfway through the script it seemed to have switched to a completely different approach, leaving some sort of weird hybrid between the two. At one point it had simply inserted pseudocode instead of actual functional code. Every attempt to get it to fix its issues just made things worse. In the end I wrote the script myself.
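For reference, the kind of script I was after is genuinely simple. Something like this sketch would do (the “routine” patterns here are made-up examples, not my actual log format):

```python
import re

# Patterns for routine traffic to drop (illustrative, assumed noise).
NOISE_PATTERNS = [
    re.compile(r"DHCP (renew|ack)"),  # routine lease chatter
    re.compile(r"NTP sync"),          # clock synchronization messages
]

def filter_log(lines):
    # Keep only the lines that match none of the routine patterns.
    return [ln for ln in lines
            if not any(p.search(ln) for p in NOISE_PATTERNS)]
```

An hour of work at most, which is what made the model’s incoherent output so disappointing.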
I’ve seen examples from other people who attempted to use it, and it’s just bad. It’s like having a junior programmer high on weed writing your code; checking and fixing it takes more time than writing the code yourself.
Then there’s the issue of copyright: a lot of the training data wasn’t licensed, and tools like GitHub Copilot want to add your code to their training set if you want to use them. That’s not OK on many levels, and not even possible for people working on corporate codebases.
A lot of programmers work on big codebases, with things like best practices and coding standards. Not only does the AI not know the codebase (and thus wouldn’t know how to do a lot of things in it), it also doesn’t know about the best practices and coding standards. So for those kinds of situations it isn’t useful.
I feel like people ask it to do some first-year programming tutorial task, see a result that looks somewhat like what one would expect, and conclude the thing can actually write code. In reality it really can’t, and probably shouldn’t even if it could.
That’s what I mean though. It helps give you different ideas, maybe looking at the problem a different way, but I don’t trust the garbage it spits out. At least half the time it makes something up, or it gives a solution that just won’t work, and even then it will double or even triple down on it.
Yeah, but it’s missing the discussion around the code: why it’s bad, why it may once have been correct but is wrong now, etc. It strips the context from the search result, which in my opinion makes it useless.
Well said… Thanks for spelling it out!
I’ve only managed to get very basic info out of it. For programming, I got fed up with being recommended apps, features, and settings that don’t exist.
It’s good for anything that has thousands of examples on stack overflow.
For example, every time I end up trying to work with pandas, I always forget the syntax and it’s generally good here.
Anything unusual, or that is sufficiently complicated that I wouldn’t be able to Google for, and just forget it.
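To illustrate the kind of pandas boilerplate I mean, with toy data of my own invention:

```python
import pandas as pd

# The group-then-aggregate pattern I can never remember the syntax for.
df = pd.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 5]})
totals = df.groupby("team", as_index=False)["score"].sum()
```

There are thousands of Stack Overflow answers covering exactly this shape of question, which is presumably why the model handles it well.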
It’s decent for generating ideas or names for fiction. I’ve used it for tabletop stuff a couple of times to give me NPC names or lists of personality traits, and it’s good sometimes for breaking writer’s block when I get stuck on some detail and can’t figure out what word I want to use or what to name something. You can usually get it to give you some sort of okay suggestions, and the sheer volume of ideas is usually enough to spark a better idea for me. The only weird thing I’ve noticed is that GPT-4 (or whatever flavor Bing/Copilot is currently using) REALLY likes alliteration, to a degree that is downright corny. It’s kinda weird but sort of funny honestly.
They are very useful for outlining and similar “where do I start” writing projects. They help break the dam and just get some damn words on the screen, at which point it’s often easy to continue and flesh things out to a complete thought.
I’ve refused to indulge in using them for searching. Do they cite their sources now? All I’ve seen are screenshots where it appears you’re just supposed to take their word for it. Curious if that’s changed.
Some, like Phind or Perplexity, cite their sources. And they give you the answer you’re looking for directly, without having to dig for it in a mess of “subscribe to our newsletter”, “other articles that may interest you”, three paragraphs of “if you read this article, you will know what you want to know”, “special promotion for you”, and so on.
I may grudgingly try it then. lol Though I’ve gotten quite good at filtering out the chaff from search results.
Bing Chat has become my go-to search engine for situations where I’m not looking for a specific website or other such resource, but rather some kind of information or knowledge. It does a web search in the background, puts the results into its hidden context, and then builds an answer for you based on the information it dredged up, complete with links. You can then clarify your question or ask for further details and get a back-and-forth going; it’s really handy. I’d recommend giving it a shot, and I believe it works without needing an account now.
Oh, I should note: don’t use it like an old-school search engine where you just type a couple of keywords in. Be conversational and give context to your search. Say, for example, “I’m planting a garden in Wichita, Kansas. What climate zone is it, and what sorts of flowers grow well there?” And then perhaps follow that with “Are any of those attractive to hummingbirds?” Or whatever. That should help it figure out what information to look for and how to distill what you want to know from it.
This article is not saying a lot tbh
Oh, wow. It really isn’t. Axios usually does really good reporting, but that looks more like the outline / notes for a story than something ready to publish.
I strongly dislike generative LLMs (I refuse to call them AI) for a host of reasons, but the biggest reason has less to do with the tech and more to do with the people / upper brass who are trying to replace human jobs with them and expecting it to just work (while salivating at the thought of pocketing the salary of the displaced human employee).
I don’t think the article really calls that out explicitly, but they are saying it’s not living up to the hype. As far as progress goes, I suppose that’s a good first step.
For real, it almost felt like an LLM written article the way it basically said nothing. Also, the way it puts everything in bullet points is just jarring to read.
Frankly, corporations seem to have no idea how to use LLMs. They want them to be public-facing company representatives, which is exactly what LLMs can’t do well. Where they excel is as an assistant.
Want to figure out what scale you’re playing a song in? It’s great at that. I’ve had it give me chords to go with scales too, or even asked for some scale options based on the feeling of the sound.
It’s also great for looking up terms in other languages. I’ve got some ranged weapon abilities in my tabletop RPG. I knew I wanted one of them to be called pistolero, but I didn’t know the terms fusilero or escopetero, and might not have found them on my own, yet ChatGPT came up with them right away.
I’ve also learned that it’s great at looking up game guides and providing hints that aren’t spoilers without giving the puzzle away. I had it generate results for the Lady’s Maze in Planescape: Torment and the Water Temple in Ocarina of Time. Amazing hints without giving it away.
If you have your own brain and want to off-load some simple queries, it’s great. If you want to use it in place of a human brain to talk to customers, you’re barking up the wrong gpt.
That music example is how I’ve used them, it really is spot on. Key, tempo, scale, overlapping scales that could be used, plus factoids included. It really can be very helpful.
improving and integrating the technology is raising harder and more complex questions than first envisioned
Many people not only envisioned but predicted these problems as soon as the hype cycle began.
Interesting article. I’d have loved to see some stats on how LLM investment and LLM startups are doing.
AI doesn’t have to be good, it only needs to be good enough. Even if it’s just barely functional, if it’s cheaper than paying a human, then it will be used by capitalists.