Replies: 10 comments 28 replies
-
Interesting idea. In my experience, chat AIs like ChatGPT aren't very good at Prolog in general. An AI trained specifically for this task might actually be good enough, but training such an AI is really hard. I also think that Scryer's error messages would need to carry drastically more information before this could be remotely viable, though maybe a sufficiently advanced AI could infer most of it. My intuition is that a "dumb" system (in the sense of no trained statistical AI) like GUPU would be more impactful for less work.
-
There are some recent efforts to use LLMs to generate Prolog code: https://arxiv.org/abs/2405.17893 (I haven't read the paper, only the abstract.)
-
The biggest problem for beginners is #16. Any other effort is just a distraction from it. One needs to master non-termination first.
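For readers new to the issue, non-termination most often shows up with left-recursive definitions. A minimal textbook-style sketch (my own illustration, not taken from this thread):

```prolog
% ancestor/2 with the recursive clause first: the query below keeps
% expanding ancestor(X, Z) before ever reaching a parent/2 fact,
% so it loops instead of finitely succeeding or failing.
ancestor(X, Y) :- ancestor(X, Z), parent(Z, Y).
ancestor(X, Y) :- parent(X, Y).

parent(tom, bob).
parent(bob, ann).

% ?- ancestor(ann, tom).   % does not terminate

% Reordering clauses and goals restores termination for ground queries:
% ancestor(X, Y) :- parent(X, Y).
% ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
```

Spotting and repairing exactly this kind of problem is what a beginner has to learn before syntax-level help becomes the bottleneck.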
-
PS: I have to apologize; this discussion page is about an LLM AI "for training beginners on Prolog and through syntax errors", whereas I was writing (as always) about "a beginner in programming in the Prolog world". My apologies. And thank you...
-
For anyone here who saw my talk (still working on getting the slides together), you'll know that an LLM is only as good as the dataset it is trained on. We could absolutely make one that tutors people on pure Prolog or Scryer Prolog, or answers questions about the common library, if (and only if) we create the right dataset (unclear whether that is possible!). Based on the presentation he gave, @UWN probably has the best dataset in the world for such a purpose, but that's not what he created his Prolog training tool GUPU for. As a community effort, though, creating such a dataset would be the only hurdle to building the LLM we want.
-
#302 would accelerate the localization of a syntax error very cheaply. As for training data, the data is very biased: I myself "harvest" the logfiles for misspellings and unfitting names every semester, which shapes the edits from then on. The accepted syntax is also quite restricted; roughly, goals and non-terminals must fit on a single line, which rules out many of the harder-to-spot errors. And then, it's all in German...
-
Ah, yes, I should clarify: we can have an LLM that handles general conversation around Prolog, more like an interactive Q&A. It could not be used to help find logical errors without running Prolog on the backend, possibly handing the work off to GUPU or doing some kind of genetic programming to search for logical errors. We would actually need the architecture discussed in my talk, and the LLM part would be the easiest (and least useful) part of the system; the Prolog that finds the logical errors would be the most difficult part.
-
Off topic again, but LLM and Prolog style together... An ACM SIGPLAN SPLASH 2024 talk by Erik Meijer, "From AI Software Engineers to AI Knowledge Workers": https://www.youtube.com/live/_VF3pISRYRc?&t=15429 It is about eliminating the programming work, so to speak, in a safe way, with LLM AI combined with a Prolog-like internal language (a.k.a. neuro-symbolic computing?) generated from natural-language input. Some "slogans" from the presentation: "Tools are Relations/Predicates/Facts"; "Primitive tools are facts"; "Chains are conjunctive goals"; "Derived tools are Horn Clauses".
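Read as Prolog, those slogans might be sketched like this (my own illustrative mapping, not code from the talk):

```prolog
% "Primitive tools are facts":
capital(france, paris).
population(paris, 2100000).

% "Derived tools are Horn Clauses", and "chains are conjunctive goals":
% a composite tool is a clause whose body chains simpler tools
% together as a conjunction.
capital_population(Country, N) :-
    capital(Country, City),
    population(City, N).

% ?- capital_population(france, N).
%    N = 2100000.
```

The appeal of the neuro-symbolic framing is that the LLM only has to pick and order the goals, while the Prolog engine guarantees the chain is executed soundly.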
-
In the context of my comment here about AI, and how I'm now learning the Smalltalk language/programming system/environment as one of my personal acts of resistance to the LLM and "generative" AI zeitgeist... I just found a 2024 book by Alan F. Blackwell, "Moral Codes: Designing Alternatives to AI"; it seems to be about "why the world needs less AI and better programming languages": https://direct.mit.edu/books/oa-monograph/5814/Moral-CodesDesigning-Alternatives-to-AI https://moralcodes.pubpub.org/ And, interestingly enough, the author seems to write a lot in the book about the Smalltalk world/way...

PS: As we are in the Prolog world here, I would add some words about Prolog from Alan Kay's 1993 paper "The Early History of Smalltalk": "A look beyond OOP as we know it today can also be done by thinking about late-binding. Prolog's great idea is that it doesn't need binding to values in order to carry out computations. The variable is an object and a web of partial results can be built to be filled in when a binding is finally found." [...] "(It's a pity that we didn't know about PROLOG then or vice versa, the combinations of the two languages done subsequently are quite intriguing)."
-
I've said in my comments here, directly or indirectly, how I see/feel the LLM AI. I said "A". "A" plus the Smalltalk programming environment. The Smalltalk world. The Smalltalk way.

As some may know, the Smalltalk world pioneered things like test-driven development, refactoring, and design patterns in OOP, let alone the idea of OOP itself, in a pure form. And of course the idea of the integrated development environment, which Smalltalk fully incarnates. At least in that "IDE" context, to paraphrase Tony Hoare's remark on Algol, the Smalltalk programming system was so far ahead of its time that it was (and in many respects still is, I think) an improvement not only on its predecessors but also on nearly all its successors. Smalltalk embodies a paradox (?): a system intended and designed for children, yet adopted by professional programmers. In fact, "the purpose of the Smalltalk project is to provide computer support for the creative spirit in everyone" (Daniel Ingalls).

But back to my "AI + programming" subject. Back to the "B": Smalltalk plus the LLM AI. The "B" is a free 2024 paper, "Talking to Objects in Natural Language: Toward Semantic Tools for Exploratory Programming": https://dl.acm.org/doi/10.1145/3689492.3690049 (Or another, "non-AI", great 2022 paper from the same authors: "A Pattern Language of an Exploratory Programming Workspace" [PDF, 3.1 MB].)

And some "C" words... Personally (I'm a "civic", "flyweight user" of PLs, not a programmer, and more a student [young enough at heart] than a user), I don't need/want any backpropagation-like AI for anything. But could there be some inspiration in all that? Not only in the idea of using LLM AI as a help in programming; in all of it. I mean, inspiration can be everywhere. The Raku programming language is the fruit of discontent and tension, maybe even of despair, in the Perl community at a certain point in time. It was an attempt to make the language anew, to make the language exciting again: not just a better language with some new features, but a revolutionary one, built on big ideas, even infinitely big ones, so to speak. And in the Raku world, a "Less Than Awesome" error message (non-descriptive or unrelated to the actual error) is considered a bug. BTW, Raku was designed so that it can be learned/used like a natural language; one can write programs in a baby-Raku style, or in whatever way suits you best.

Now, what is the purpose of the Scryer project?
-
An idea occurred to me today: would it be possible, or even a good idea, to augment the Scryer Prolog playground with a chatbot AI that could coach beginners through their syntax errors? The effort involved in training an AI to a good level of quality might preclude realistically doing this.