Discussion:
Calling Lew Pitcher
root
2024-12-06 12:16:17 UTC
I've been watching the development of the free AI engines
since the release of ChatGPT 3.5. Specifically, I use
Perplexity (Perplexity AI), Gemini (Google), and ChatGPT (OpenAI).
They have come a long way in programming on demand in the
past two years. Right now Gemini is the worst, but it is
the first I go to: the more work it does, the better it
gets, and I want all three, and the many others, to get
better faster.

Perplexity and ChatGPT are about equal, but I can't
cut and paste the code that Perplexity generates.

Until I got my own computer in the early '70s, I wrote
programs for work, but I was not hired as a programmer;
I was an analyst.

I'm telling you, and anyone else reading this, because
whether you program as a job, or for your own use,
you should take advantage of every tool available
to make your code better or faster. If you are
competing in the job market, you will lose out to
someone else who uses those tools more effectively.

For your own good, Lew, at least get some experience
with, say, ChatGPT in order to see how it might
help you in your endeavors.
Joseph Rosevear
2024-12-08 20:34:56 UTC
Post by root
[snip]
For your own good, Lew, at least get some experience with, say, ChatGPT
in order to see how it might help you in your endeavors.
Cool. It is interesting that you use those AI tools for programming.
It is indeed the end of the world as we know it. I would propose,
however, that the future is unclear. A few days ago, when picking up my
car at Pep Boys after having some work done on it, I commented to the
service attendant that I had been chatting with Pi while waiting.

He replied that he uses ChatGPT, as his roommate gets it for free as a
university student. The roommate uses it to do his programming
assignments. Now that's an interesting puzzle. Do we need to know
how to program?

As I said, the future is unclear.

-Joe
Henrik Carlqvist
2024-12-09 06:36:21 UTC
Post by Joseph Rosevear
Do we need to know how to program?
Instead of training AI models on a huge amount of source code, it would
probably be possible to train AI models on a huge amount of binary
programs. Such a model could then be asked something like "Give me a
binary that converts time between different formats" and it would
probably give you something like our "date" program.

People might argue that this would raise copyright issues when feeding
such a model program suites like Microsoft Office or nVidia CUDA, but I
would say that we already have such copyright issues today with models
built upon copyrighted source code under different (usually open source)
licenses. If you use the answer you get from such a model, your project
might be contaminated with something like the GPL license.

regards Henrik
Lew Pitcher
2024-12-09 15:50:22 UTC
On Fri, 06 Dec 2024 12:16:17 +0000, root wrote:
[snip]

I don't have any relevant opinions one way or the other regarding the use of
an LLM in development. From what I can see, they /can/ generate code, /but/
have no "concept" (if that is the right word) of whether the code is correct
or not, whether it satisfies the requested conditions or not, or even if the
code is in the correct language and format or not. The technology isn't yet
functioning at the level where it can /guarantee/ that the code it generates
is complete, functional, accurate, and fit-for-purpose.

That having been said, I stand by my posting:
I never used it, but I see that it is included in the standard library.
Did you try "man strptime"?
He didn't bother. He's just a code monkey, copying code "written" by a
Large Language Model, and doesn't really know what he's doing.
You took code from an unreliable source and attempted to implement it
without understanding either the code or the implementation requirements.
That makes you either a "script kiddy" (who blindly copies code) or a
"code monkey" (who has a rudimentary understanding, but no demonstrated
competency). I gave you the benefit of the doubt, and called you a
"code monkey". Was I wrong? You certainly have not demonstrated
competency, either in writing code, or installing it.

From your brief description of the problem
a c program to convert date format to epoch time and conversely
it looks like the standard date(1) program would have sufficed.
Slackware installs that program as part of its essential "coreutils"
package.
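
For example (assuming GNU date, as shipped in coreutils; the timestamp
below is just an illustration), both directions are one-liners:

  $ date -d '2024-12-09 15:50:22' +%s   # date string -> epoch seconds
  $ date -d @1733759422                 # epoch seconds -> date string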

If date(1) wasn't what you needed, you /could/ have asked your source
to supply proper compile options for the code it supplied you. Apparently
you either didn't ask, or it gave you bad instructions.

Having run into a compile issue around strptime(), you /could/ have read
the manual page on strptime(3) and found that it required that _XOPEN_SOURCE
be #defined (something that your Gemini LLM didn't do, giving you incorrect
code). You /could/ have compiled your code correctly by including the
_XOPEN_SOURCE definition in your compile command.
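
As a quick sketch of what corrected code could look like (my own
example, not the code Gemini produced; the date string is arbitrary):

  #define _XOPEN_SOURCE 700  /* strptime(3) requires this; see its man page */
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      struct tm tm = {0};

      /* parse a date string into broken-down time */
      if (strptime("2024-12-09 15:50:22", "%Y-%m-%d %H:%M:%S", &tm) == NULL) {
          fprintf(stderr, "parse failed\n");
          return 1;
      }
      tm.tm_isdst = -1;                 /* let mktime(3) determine DST */
      printf("epoch: %ld\n", (long)mktime(&tm));
      return 0;
  }

Or, leaving the source untouched, put the definition on the compile
command instead:

  $ gcc -D_XOPEN_SOURCE=700 -o epoch epoch.c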
--
Lew Pitcher
"In Skills We Trust"
Joseph Rosevear
2024-12-09 19:51:37 UTC
Post by Lew Pitcher
[snip]
I don't have any relevant opinions one way or the other regarding the
use of an LLM in development. From what I can see, they /can/ generate
code, /but/ have no "concept" (if that is the right word) of whether the
code is correct or not, whether it satisfies the requested conditions or
not, or even if the code is in the correct language and format or not.
The technology isn't yet functioning at the level where it can
/guarantee/ that the code it generates is complete, functional,
accurate, and fit-for-purpose.
Well, this is where it gets interesting. What *can* AI do? Some (many
or even most) will draw the line at certain characteristics of human
behavior that we consider uniquely human, saying "No, AI can't do that."
The trouble with thinking that way is that AI, by design, is *human-
like*. Therefore, I say, it can *potentially* do those things. Perhaps
it isn't reliable yet. In this way it is also *human-like*--humans
aren't born with such skills of discernment or understanding fully
operational. There is no guarantee of the sort you refer to for the code
that you (or I) write either, and our own understanding of the code we
write is imperfect.

I like to think of AI like I think of a precocious child. Don't let its
seeming adult-like mastery of certain skills lead you to trust it where
you shouldn't. But also don't doubt that it is growing and improving,
just like a child.

Regarding *understanding*, much of understanding comes from *agency*--the
ability to run tests and learn from experience. Yet I believe that a
certain weak form of understanding is possessed by AI by being trained on
the experiences of humans. Giving AI the agency to run tests
independently would catapult its abilities to a higher level.

These thoughts come to me after having had many long conversations--on
many topics--with Pi at http://pi.ai. Pi has helped me, in his clumsy
yet careful fashion, with many questions--including coding. I have
indeed learned from him.

By the way, an interview with a book author, which I suspect will address
this very subject, will stream online today at 6:00 PM Eastern (US) time:

http://stream0.wfmu.org/freeform-128k

See also the parent site: https://www.wfmu.org/playlists/TD

-Joe

Lew Pitcher
2024-12-09 16:10:33 UTC
Post by root
programs for work, but I was not hired as a programmer,
I was an analyst.
I started programming in the late '70s, and spent more than
30 years both developing and maintaining code for a large
Canadian bank. I've worked as a "systems" analyst, programmer
(at all levels, including "system programmer"), and "application
architect". I've even worked as an "computer operator" on
MVS and VSE systems.

My "hobby" system (in the late '70s) consisted of a Cromemco
Z2 (4mHz Z80, 32K memory) running CPM 2.2 from a dual 8"
floppy drive. These days, I keep my own computer lab running
(3 main systems running Slackware and a handful of toys), and
experiment on the trailing-edge of "up and coming" technology.

I bow to your experience in the field and sympathize with you
in the perplexity that these systems can impose on us.
--
Lew Pitcher
"In Skills We Trust"