ChatGPT Rather Useful Seminar
Did a Rather Useful Seminar today all about ChatGPT. I said I’d put up the slide deck for the talk, and here it is. Thanks for being a wonderful audience and asking such good questions.
One of the problems with film photography is that when you scan the film negatives you get an image file that doesn’t contain the metadata (called EXIF data) giving details of when the photograph was taken, the camera used and whatnot. I’ve found a lovely tool called ExifTool which I can use to set the required values in an image file. But what I really wanted was a little program with a Graphical User Interface that lets me select a folder full of images and then set all the images to the specified camera make and model.
I had a quick chat with ChatGPT and it wrote two programs for me. One in Python and the other in PowerShell. I fired up the Python one and it just worked. It made the menu you can see above, using the TkInter interface. The formatting of the window could use a little work, but the program itself seems to do exactly what I want. All I had to do was describe what I wanted and out popped the code. This is both highly impressive and deeply scary.
I’ve now decided that I don’t want to have to enter the make and model of the camera each time, I want the application to use the folder hierarchy to work out what the name should be (i.e. for the above image I would have a parent folder called Mamiya which contains a subfolder called Mamiya Press 23 Standard in which the image files are stored). I’m going to tell ChatGPT about my idea and then get it to create that instead. When I’ve got it all working I’ll put it on GitHub for anyone who has the same problem as me.
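As a rough sketch of the folder-driven idea, here is roughly what I’ll be asking it for. This is my own outline rather than the program ChatGPT produced, and it assumes ExifTool is installed and on the path:

    # Pick a camera model folder, read the make from its parent folder,
    # then let ExifTool stamp every image inside it.
    import subprocess
    from pathlib import Path
    from tkinter import Tk, filedialog

    root = Tk()
    root.withdraw()                      # we only want the folder picker, not a window
    folder = Path(filedialog.askdirectory(title="Select the camera model folder"))

    model = folder.name                  # e.g. "Mamiya Press 23 Standard"
    make = folder.parent.name            # e.g. "Mamiya"

    subprocess.run(
        ["exiftool", f"-Make={make}", f"-Model={model}", "-overwrite_original", str(folder)],
        check=True)

    print(f"Tagged the images in {folder} as {make} {model}")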
As part of the fixing of the Canon Dial camera I needed a spanner to remove the dial spring winder. The spanner is a bit special because it needs to fit between the winder and the camera body. Ivan was kind enough to make me one out of brass, but it turned out to be a little bit too thick. So I thought I’d 3D print one. I’ve been impressed with how tough PETG prints have turned out, and the spanner doesn’t have to be particularly strong, so I was hopeful this might work.
And, since I’m lazy, I thought I’d ask ChatGPT to produce the spanner design as a Python program I can run inside FreeCAD. I asked the question and out popped some Python. Which didn’t work. There then followed around fifteen minutes of me explaining why the supplied design was wrong, followed by ChatGPT producing a differently wrong result along with an explanation of how it had fixed the faults that were still there.
Eventually I gave up and wrote the design in about five minutes or so. These things are very clever, but if they get it wrong it seems that sometimes it stays wrong whatever you do.
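For anyone curious about what such a script looks like, a simple pin spanner only needs a handful of calls to FreeCAD’s Part module. This is a sketch with made-up dimensions and a guessed-at shape rather than my actual design, but it shows the kind of thing involved:

    # A simplified pin spanner: a thin ring with two drive pins.
    # All the dimensions are placeholders, not the real Canon Dial measurements.
    import FreeCAD as App
    import Part

    doc = App.newDocument("Spanner")

    thickness = 2.0          # thin enough to slip between the winder and the body
    outer_radius = 15.0
    bore_radius = 9.0        # clears the centre of the winder
    pin_radius = 1.5
    pin_height = 3.0
    pin_offset = (outer_radius + bore_radius) / 2

    # The ring: a disc with the centre bored out
    ring = Part.makeCylinder(outer_radius, thickness).cut(
        Part.makeCylinder(bore_radius, thickness))

    # Two pins standing up from the face of the ring, on opposite sides
    pin_a = Part.makeCylinder(pin_radius, pin_height, App.Vector(pin_offset, 0, thickness))
    pin_b = Part.makeCylinder(pin_radius, pin_height, App.Vector(-pin_offset, 0, thickness))

    Part.show(ring.fuse(pin_a).fuse(pin_b))
    doc.recompute()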
I found this article which describes how you can get something a bit like ChatGPT running on a Raspberry Pi. You need the latest Raspberry Pi 5, and it has to be the largest model with 8GB of memory, but it does work. It’s a bit slow and not very accurate, but it is fun to chat with. I think it is very useful to show slightly broken versions of the technology to folks so that they start to understand its limitations.
I’m now very tempted to make an “exchange” for the Red Phone which runs this engine and rings you back with answers to questions.
Did my talk at DDD 2024 this morning. After a brief panic where it turned out that my HDMI adapter wouldn’t talk to the video system I managed to get everything working (thanks Warren for lending a working adapter). The audience was great and we had a splendid discussion afterwards. You can find the slide deck here.
I might even have a go at printing some of these.
Woke up this morning to the best of news. My session proposal for DDD North has been accepted. Yay! I’m going to be talking about ChatGPT and how it will change your life. Probably. Registration will be open soon, along with the full programme for the day. Really looking forward to it.
I’m painfully aware of the fact that AI has the potential for huge problems on lots of levels. But I don’t think ignoring it is a viable solution. Today I was writing a section of the red phone article (coming to a magazine soon). It was a bit about how JavaScript async and await works. I’d come up with some text and I said this to ChatGPT:
“Would this be a good explanation of async and await: imagine that you could create new versions of yourself at will. If you need to queue to buy something you just create a new version of yourself, put that in the queue and then go about your business. At some point the “new you” will reach the head of the queue, get what you want and then call you and say it has finished before vanishing in a puff of smoke. That’s how awaits and promises work. “
I got this response:
Your analogy is creative and can be a fun way to explain how await and promises work, especially to someone with a vivid imagination. It captures the essence of asynchronous operations, concurrency, and non-blocking behavior, but it might be a bit metaphorical for some audiences who prefer more technical or straightforward explanations.
Here's a breakdown of your analogy:
Creating new versions of yourself represents spawning asynchronous tasks.
Putting a new version in the queue and going about your business relates to the non-blocking nature of asynchronous tasks.
The "new you" reaching the head of the queue and getting what you want symbolizes the completion of an asynchronous operation.
The call and the puff of smoke represent the resolution of a promise and the continuation of your code.
While this analogy can be engaging and illustrative, it's essential to gauge your audience and their familiarity with programming concepts. Some may find it relatable and memorable, while others might prefer a more technical explanation. In many cases, using both a metaphor and a technical explanation can be a helpful way to cater to different learning styles and levels of understanding.
I think this was a generally useful critique. It turns out that ChatGPT doesn’t just answer questions, it can also make sure your text makes sense. Although I can’t help wondering what it would have said if it didn’t like what I’d written.
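For anyone who prefers code to metaphors, the analogy maps onto JavaScript something like this. The coffee-buying functions are made up for illustration; they aren’t part of the article text:

    // The "new you in the queue" analogy in actual JavaScript.
    // buyCoffee stands in for any slow operation you don't want to wait around for.
    function buyCoffee() {
      // The "new you": a promise that resolves when the queueing is done
      return new Promise(resolve => {
        setTimeout(() => resolve("one flat white"), 2000);   // pretend queueing time
      });
    }

    function doOtherThings() {
      console.log("Getting on with other work while the queue moves");
    }

    async function morningRoutine() {
      const coffeePromise = buyCoffee();    // send a copy of yourself off to queue
      doOtherThings();                      // carry on with your own business
      const coffee = await coffeePromise;   // the copy calls back with the result
      console.log(`Got ${coffee}`);         // ...and then vanishes in a puff of smoke
    }

    morningRoutine();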
There’s a lot of muttering about the dangers of AI but there is also no denying that there are situations in which it is genuinely useful. Today I wanted to make sound on a Raspberry Pi from a JavaScript application. So I asked ChatGPT how I might do that.
ChatGPT came up with an npm module called Speaker which looked like it was just the job. I tried to install it and the installation failed. And this is where it got properly interesting. I gave the installation error to ChatGPT.
ChatGPT recognised the error and told me to install the ALSA development headers. I did that and the installation worked fine.
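For the record, on Raspberry Pi OS the ALSA development headers come from the libasound2-dev package (installed with apt), which the Speaker module needs so that its native code can compile. Once it had built, playing sound is just a matter of writing raw PCM samples to the module. This is a minimal sketch of my own rather than the code ChatGPT suggested:

    // Play one second of a 440 Hz tone through the Speaker module.
    // The format settings and the tone loop are my own example, not ChatGPT's answer.
    const Speaker = require('speaker');

    const sampleRate = 44100;
    const speaker = new Speaker({ channels: 1, bitDepth: 16, sampleRate: sampleRate });

    // Fill a buffer with 16-bit signed PCM samples
    const samples = Buffer.alloc(sampleRate * 2);
    for (let i = 0; i < sampleRate; i++) {
      const value = Math.round(Math.sin(2 * Math.PI * 440 * i / sampleRate) * 0x3fff);
      samples.writeInt16LE(value, i * 2);
    }

    speaker.write(samples);   // the module is a writable stream that plays whatever you send it
    speaker.end();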
Now, none of this is really rocket science. ChatGPT “knows” this stuff because it has linked things together and can establish the context of a conversation and use this to find the correct items. But it is still darned useful.
My advice with things like ChatGPT is that if you have a context in which you think it might be useful you should try it. If it works, good for you. If it goes wrong you will learn more about problems with the way the technology works. Either way, you will be brushing up on your AI interaction skills, which is something you need to do.
Yesterday I was building the web page used to set the colour of my remote controlled light. At first I went with three sliders to set the amount of red, green and blue (RGB) in the required colour. Then I decided that it might be better to use Hue, Saturation and Brightness (HSB) as a way of setting the colour. RGB is great, but if you want to change the overall brightness of a light it is a bit of a pain because you have to adjust all three sliders. This is much easier with HSB because you can adjust the brightness directly.
Adding the three new sliders to the web page was easy, but then I was faced with the problem of converting RGB to HSB and back. I wanted the display to be updated so that if you change the Hue the settings of the RGB values change to keep track. So I needed some conversion functions. I thought I’d ask ChatGPT to lend a hand. This is how I would expect the tool to be used. You’ve got a strong idea of what you want to do, experience with the tools and techniques you’re using and you just want the code.
ChatGPT was very happy to drop out JavaScript that it said would work. But it didn’t. Turns out that both Blue and Brightness begin with B. So all the single letter variables in the functions clashed with the parameters. I explained this problem and ChatGPT then dropped out some code that worked.
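For reference, a conversion that avoids the clash looks something like this. It’s my own sketch written after the event rather than ChatGPT’s corrected version:

    // Convert RGB (0-255 each) to HSB: hue 0-360, saturation and brightness 0-100.
    function rgbToHsb(red, green, blue) {
      const r = red / 255, g = green / 255, b = blue / 255;
      const max = Math.max(r, g, b), min = Math.min(r, g, b);
      const delta = max - min;

      let hue = 0;
      if (delta !== 0) {
        if (max === r)      hue = ((g - b) / delta) % 6;
        else if (max === g) hue = (b - r) / delta + 2;
        else                hue = (r - g) / delta + 4;
        hue *= 60;
        if (hue < 0) hue += 360;
      }

      const saturation = max === 0 ? 0 : delta / max;
      const brightness = max;

      return {
        hue: Math.round(hue),
        saturation: Math.round(saturation * 100),
        brightness: Math.round(brightness * 100)
      };
    }

    console.log(rgbToHsb(255, 0, 0));   // { hue: 0, saturation: 100, brightness: 100 }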
For me the interesting thing about this was that I found ChatGPT useful, but I needed to add my programming smarts to solve problems in the code that it produced. This gives me a tiny bit of hope for the future. It doesn’t look like I’m going to be completely replaced by ChatGPT. At least for now…
In 2005 I bought a little toy called 20Q. It seems to be able to work out what you are thinking of. It asks 20 questions and then makes a guess. If it gets it wrong it asks 5 more and then tries again. Today we managed to beat it by thinking of “coffee table”. It thought we meant “whiteboard”. So it’s not perfect. But it is darned impressive. And around 20 years ago it was even more so. Of course it’s not clever. It uses a neural net that was derived from users who logged onto a website to play the game. The developers managed to cram enough of the questions and answers they had harvested from the site into the device to make it work.
I don’t recall it being that much of a sensation when it came out, but it is very interesting to compare it with ChatGPT. Both devices take in lots of information and then respond to queries about it. Both give you the impression that they know what is going on, when really they don’t.
You can’t buy 20Q devices any more, but you can play online and there is also a mobile application. I got out our original device which still works really well and really impressed a certain six year old of our acquaintance.
I’m still playing with ChatGPT. It’s turning out to be quite useful. Today I asked it how I should go about building a web application using Node and Express.
ChatGPT came back with a useful description of what I needed to do and I was able to ask more detailed questions about specific elements of the code that it suggested. I was then able to ask for sample code for the Express pages and the route handling code. This was all very impressive. It didn’t save me a lot of work because I’ve already implemented the system as described, but it was interesting to see that it produced a design and some code that was pretty close to what I’d written.
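If you haven’t met Express before, the general shape of the page and route handling it described looks something like this. The route names and the page content here are my placeholders rather than what ChatGPT actually produced:

    // A minimal Express application with one page and one route that handles a form post.
    const express = require('express');
    const app = express();

    app.use(express.urlencoded({ extended: true }));   // parse form posts

    app.get('/', (request, response) => {
      response.send(
        '<form method="post" action="/setting"><input name="value"><button>Save</button></form>');
    });

    app.post('/setting', (request, response) => {
      console.log(`New value: ${request.body.value}`);
      response.redirect('/');
    });

    app.listen(3000, () => console.log('Listening on port 3000'));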
If you need some code writing you might like to try creating a description of what it needs to do and then dropping it into ChatGPT. At the very worst you’ll have a good description of your problem; at best you might get something useful back.
Last week I discovered that ChatGPT can write “In the style of Rob Miles”. I’m not sure if it will do this for everyone. The internet creates bubbles of personalisation so that our searches find the kinds of things that we like, and perhaps ChatGPT does that too. Perhaps if you ask for something “in the style of Rob Miles” you’ll get the version that fits with your interests. Even so, ChatGPT knows about a version of me.
ChatGPT knows me because I’ve been blogging and posting stuff for over 20 years. There’s a lot of Rob on the internet. Although I must admit that all those years ago when I decided to put up a blog post every day I didn’t think that it would be a step on the road to creating a computerised copy.
It is a tiny bit worrying. Do I need to exist any more, now that there is a freely available version online? People have been known to pay me to write things “in the style of Rob Miles” but now anyone can do it. Where will this lead? Folks might get ChatGPT to write horrible things in my style. Even worse, they might get paid instead of me. When I put my blog up I don’t remember ticking a box that said “I allow companies obsessed with shareholder value to mine this data and use it to create machines that can imitate me.” Oh well. Welcome to the modern world. It is a form of immortality I suppose. At least until ChatGPT finds a better Rob Miles.
Good grief. That is truly scary. I’ll get my coat…..
After yesterday's post I asked ChatGPT to compare ChatGPT and Eliza. It wrote this:
ChatGPT and Eliza are both AI-powered conversational systems, but they are quite different in terms of their capabilities and intended uses.
ChatGPT is a state-of-the-art language model developed by OpenAI that is capable of generating human-like text in response to input from a user. It can be used for a variety of purposes, including generating creative writing, responding to customer inquiries, or engaging in conversation with users.
Eliza, on the other hand, is a much simpler program that was developed in the 1960s. It uses a set of rules and patterns to generate responses to user input, and is intended primarily as a demonstration of how computers can simulate conversation. Unlike ChatGPT, Eliza is not capable of generating original text and is only able to produce responses based on the rules that it has been programmed with.
Overall, ChatGPT is a much more advanced and capable system than Eliza, but Eliza has its own unique place in the history of AI.
You could believe a person wrote this, just like you could believe Eliza was a psychiatrist. But if you look closer you see a well put together pastiche of a few facts gleaned from the web, plus one paragraph straight from the sales brochure. It’s cleverly done. But I still don’t think it’s clever.
Many years ago there was a program called Eliza. It was very good at making people think it was human. I came across a Basic version in “More Basic Computer Games”, typed it into my Micro-Tan and got it working. It was great fun. It pretended to be a type of psychiatrist but all it ever did was parrot back to you what you had entered. My favourite bit of the code was the part which changed “I” to “You” and “My” to “Your” and so on, so that what you had typed could be sent back to you as if the computer understood what you had just written. What struck me about the reaction of other people to the program was how easy it was to make them think the thing understood what they had entered and, which is much more scary, how keen folks were for this to be the case. They really wanted to believe the software was properly clever. Me, I just wanted to type in things like “I’ve just shot grandma” so that I could get back the response “You’ve just shot grandma? Tell me more about your family”.
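That pronoun-swapping trick is tiny. In modern JavaScript the whole of it looks something like this (my own reconstruction, not the Basic listing from the book):

    // The pronoun reflection at the heart of Eliza's trick.
    // Order matters: "I've" has to be handled before the lone "I".
    const swaps = [
      [/\bI'?ve\b/gi, "you've"],
      [/\bI\b/gi, "you"],
      [/\bmy\b/gi, "your"],
      [/\bme\b/gi, "you"],
    ];

    function reflect(sentence) {
      return swaps.reduce((text, [pattern, replacement]) =>
        text.replace(pattern, replacement), sentence);
    }

    console.log(`${reflect("I've just shot grandma")}? Tell me more about your family.`);
    // you've just shot grandma? Tell me more about your family.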
I was strongly reminded of Eliza when Ross was showing me how good ChatGPT is at writing programs. He asked for some Arduino code to make lights flash in response to sensors and what came back looked like fairly convincing C. It was very impressive. But it is still not clever. It is just taking a bunch of stuff from you, looking things up and then crafting a response that chimes with what you expected to see. Sometimes it might combine things in ways you don’t expect, sometimes it will find things that strike you as original. And it might react differently from Eliza if you tell it you just shot grandma. But I don’t think it’s clever like we are. That’s not to say that it won’t change the world though. It will. For one thing search engines are going to get a lot easier to use and a lot more conversational. For another, the essay and the programming exercise are about to get massively devalued as a way of assessing knowledge. Some students will use ChatGPT to craft their submissions. Others will question why they are being asked to write something which can be done better by a machine.
For me the hardest thing about writing and programming has never been about turning out the prose or getting the code to work (although it can be fiddly), it has been working out what the program needs to do or thinking up a good subject and then crafting a narrative that works well with it. I like to think that with more of the “grunt work” out of the way with tools like ChatGPT we could focus our efforts on these human parts of problem solving. I’m looking forward to playing with it.
Rob Miles is a technology author and educator who spent many years as a lecturer in Computer Science at the University of Hull. He is also a Microsoft Developer Technologies MVP. He is into technology, teaching and photography. He is the author of the World Famous C# Yellow Book and almost as handsome as he thinks he is.
Make your own programming language. Find out more here.