On Wednesday, January 25, 2017 at 4:30:42 PM UTC+13, pete dashwood wrote:

> Auckland University of Technology [this establishment no longer exists...]

You have that the wrong way around. AUT is still around. In 2000 the Auckland
Technical Institute (ATI) was granted university status and renamed itself to AUT.
So, in the 70s she would have graduated from ATI, but that still exists, just
renamed to AUT, and still uses the Wellesley St campus but has several others as well.
It's interesting to see that what I have been doing for the last half
century is now going to change, radically...
(Where have I heard that before?)
Take a look at this: http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html
I remember doing some experiments (in COBOL) back in the last century
after I read an article on heuristic programming. The result was a
program that could (eventually) find its way through a maze. There was
no guarantee that the way it found was the BEST way, but, provided the
maze COULD be solved, it would solve it.
Heuristic programs start off with a heuristic (a "rule of thumb", if you like; for example: "Every time you encounter a wall, turn left".)
Another part of the program monitors progress and checks if the same
junction is being reached, so it detects if it is going in circles and
modifies the heuristic at that point... After a period of time or a
number of changes of direction (decided by the programmer initially, but could be modified by the program in an advanced incarnation), the
heuristic is modified and the process is iterated. Obviously, depending
on the power of the processor, several billion alterations and tries can
be made in a relatively "short" period of time, and, eventually a
successful solution is reached. (The final heuristic depends on the size
of the maze, but it will typically be a long string of "Right"s and
"Left"s.)
I got really excited about this and discussed it with some of my peers
over coffee at work. The general reaction was one of horror...
The only way to "know" how the solution was reached would be to log
every change made (including the discarded ones) and the "audit trail" (containing billions of lines) would require several Human lifetimes to analyze, so you'd need another computer to do that...
One of my colleagues summed up the general feeling by saying: "Pete,
don't you find it a bit frightening that a computer can reach a result
without anyone knowing how it got there?" (As I had programs under
development for work that did that every day [it's called:
"Debugging"], I wasn't in the least bit fazed by it...)
I remember replying: "No, I find it exciting...".
The applications of heuristic programming were kind of limited to things
like production planning (I remember looking at a heuristic corrugator cardboard scheduler, written in Fortran, which determined the best
places to cut long runs of corrugated cardboard for packaging. One of my colleagues, a young girl who had just graduated from Auckland University
of Technology [this establishment no longer exists...] took the
corrugator scheduler "under her wing" and became a world expert in the
use of the package.)
All of these events were in the early 1970s.
They were, at least for me, the first inkling that a computer maybe did
not have to be told how to do everything... the first glimmers of
"artificial intelligence".
Subsequently, as IT gathered momentum and some of the world's best and brightest were attracted to the newly emerging "Computer Science", the
early attempts at neural networks were developed, with inconclusive
results. There was no shortage of scoffers and naysayers, and in some
cases, the sheer arrogance of "computer programmers" (how could an
inanimate machine ever be able to do what I do? Without me, that machine
is just a useless assembly of wires and metal...) made me want to punch people.
But, today we are poised on the brink of another great leap.
The deep networks and heuristics used in today's AI are quantum leaps
above the first stumbling steps discussed above.
Data CAN be used to train systems and that will just be the beginning.
I remember writing a database management system called "DIC" - Database Interface Control - (for VSAM/KSDS on IBM mainframes), back in the late 1970s. It was eventually superseded when the company standardized on the
new Relational Database technology in the mid 1980s and went to DB2. But
for around 6 years DIC managed the distribution of photo-copiers across
14 countries in Europe and it was well received by everybody except the French. (No surprises there... :-))
The point is that when I designed and built it I remember saying to my
team: "Because DIC will control the interface to the data, we could
write a module that will monitor how and where data is accessed. We
could then use that information to manually adjust and organize the
physical VSAM files for optimal access...or, we could let DIC interface
to IDCAMS and do it automatically."
A case of the actual data causing re-organization of itself, depending
on how it was being accessed. (Most modern RDB systems have modules that
can do this, at least at a rudimentary level. Good DBAs use this data to control physical organization of datasets.)
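Reduced to its bones, the idea looks something like this (a Python sketch
with invented names; the real DIC drove VSAM datasets through IDCAMS, none
of which survives here):

from collections import Counter

class DataInterface:
    # Everything reads through one interface, so the interface can keep
    # access statistics -- the "monitoring module" described above.
    def __init__(self, records):
        self.records = dict(records)      # key -> record (stand-in for a KSDS)
        self.hits = Counter()

    def read(self, key):
        self.hits[key] += 1
        return self.records[key]

    def reorganise(self):
        # Re-cluster storage with the hottest keys first: the automatic
        # equivalent of re-organising the physical files for optimal access.
        hot_first = sorted(self.records, key=lambda k: -self.hits[k])
        self.records = {k: self.records[k] for k in hot_first}
        return hot_first

dic = DataInterface({"FR": "Paris", "DE": "Bonn", "NL": "Amsterdam"})
for _ in range(3):
    dic.read("DE")
dic.read("NL")
print(dic.reorganise())                   # ['DE', 'NL', 'FR']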
I posted the link above because there may be some people here who share
my enthusiasm for the idea of computers programming themselves. The
article is quick to point out that a Human is required to broker the
data being used for training, so they postulate that the ROLE of a
programmer will change.
"Releases will be based on the software being trained to another level; updates will be based on new data sets and experiences."
This conjures the image of a programmer as a "lion tamer", whip in one
hand, template in the other... :-)
With that happy thought I have to get back to writing some particularly
complex code that has had my head spinning for the last 2 days... I'm
only sorry that I probably won't see the day when THAT code could write itself...
Pete.
--
I used to write COBOL; now I can do anything...
In article <eeqkevF17rfU1@mid.individual.net>,
pete dashwood <dashwood@enternet.co.nz> wrote:
> It's interesting to see that what I have been doing for the last half
> century is now going to change, radically...
> (Where have I heard that before?)
> Take a look at this:
> http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html
[snip]
> One of my colleagues summed up the general feeling by saying: "Pete,
> don't you find it a bit frightening that a computer can reach a result
> without anyone knowing how it got there?" (As I had programs under
> development for work that did that every day [it's called:
> "Debugging"], I wasn't in the least bit fazed by it...)
> I remember replying: "No, I find it exciting...".
There should be another half-century of experience under your belt, Mr Dashwood.
How often, over that half-century, have we been told 'The computer tells us
(blatant and obvious representation of an alternate reality which bears no
relation to the one in which you believe yourself to be and causes you
inconvenience) so it must be so'?
For example... being required to register for school at age 108. Being notified that one is in default of a fully-paid loan. Receiving magazine subscription-notices cheerfully declaring 'Now That You're Dead, You're Eligible for a Discount!'
In the article, Jim McHugh, vice president and general manager for
Nvidia's DGX-1 supercomputer, asserts 'We're using data to train the
software to make it more intelligent.'
The assumption that intelligence is the result of training causes me to
grin, albeit wearily.
> There was no shortage of scoffers and naysayers, and in some
> cases, the sheer arrogance of "computer programmers" (how could an
> inanimate machine ever be able to do what I do? Without me, that machine
> is just a useless assembly of wires and metal...) made me want to punch
> people.
Wanting to punch people for arrogance is a sure sign of humility... and I
am the King of England, God save the Me!
On Wednesday, 25 January 2017 13:30:42 UTC+10, pete dashwood wrote:

<snipped>

> It's interesting to see that what I have been doing for the last half
> century is now going to change, radically...
> (Where have I heard that before?)
> Take a look at this:
> http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html
Hi Pete
Interesting article.
I like the idea that Data CAN be used to train systems and that will just be the beginning.
I am a bit of a skeptic about media views of AI but can see it advancing.
I think that Google is doing some of this today with the way a browser
remembers your searches and anticipates keystrokes. For instance, all I have
to do is type "you" and Google anticipates www.youtube.com. This is
intelligence by keeping and using data. It is even specific to my logon.
There is also worldwide sharing of data, so one can place a photo in Google
Earth for all the world to see or find by search. I realize this is short of
what you are talking about.
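The minimal mechanism behind that sort of anticipation might look like the
sketch below (Python; Google's actual machinery is not public, and the
query history and matching rule here are invented):

from collections import Counter

# Remember past queries with counts; complete the current prefix with the
# most frequent match. Purely illustrative -- not Google's real structure.
history = Counter(["www.youtube.com", "www.youtube.com",
                   "www.youtube.com", "yahoo.com"])

def anticipate(prefix):
    matches = [q for q in history if prefix in q]
    return max(matches, key=lambda q: history[q], default=None)

print(anticipate("you"))                  # www.youtube.com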
I have a challenge for you. Which data structure does Google use in its
search engine, or does it have its own data structure? Is it a conventional
database, or discrete ISAM files, or other?
In the late 1970's, Digital Equipment Corporation (DEC) (remember them?)
Today, I think my best internet friends are Google Chrome and YouTube, and
throw in email. I can find and learn so much because of this worldwide
sharing of "searchable" data.
Advertising is trying very hard to use AI by examining your searches and
buying habits and targeting ads. For instance, if Coles finds you bought
disposable nappies, their AI may assume there is a baby and therefore target
ads for other baby products.
Greg
In the U.S. we have a store named Target and they will send you specific ads and coupons based on what you have previously purchased. They have a
program that predicts what you may be interested in buying. One father
became irate when his household received ads targeted for someone who was going to have a baby. He called up and complained because his wife was too old and his daughter was too young and no one in his household could
possibly be pregnant. Later when he talked to his wife about it he discovered that his daughter was pregnant and they had been hiding that fact from him. So Target deduced it before the father even knew about it.
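The published accounts of that story describe indicator scoring: certain
purchases are weighted as signs of a segment, and a shopper whose score
passes a threshold gets that segment's ads. A toy version (the products,
weights and threshold are all invented for illustration; the real model is
proprietary):

# Hypothetical indicator weights for the "expecting a baby" segment.
PREGNANCY_INDICATORS = {"unscented lotion": 2, "prenatal vitamins": 5,
                        "cotton balls": 1, "large tote bag": 1}

def pregnancy_score(basket):
    # Sum the weights of any indicator products in the shopper's basket.
    return sum(PREGNANCY_INDICATORS.get(item, 0) for item in basket)

basket = ["unscented lotion", "prenatal vitamins", "bread"]
if pregnancy_score(basket) >= 5:          # threshold chosen arbitrarily
    print("send the baby-product coupons")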
[snip]
I just like the idea that it may be possible for them to evolve into something that can program itself and develop a completely new kind of "intelligence".