• So, you call yourself a "Software Developer"?

    From pete dashwood@1:2320/100 to comp.lang.cobol on Wed Jan 25 16:30:34 2017
    From Newsgroup: comp.lang.cobol

    It's interesting to see that what I have been doing for the last half
    century is now going to change, radically...

    (Where have I heard that before?)

    Take a look at this: http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html

    I remember doing some experiments (in COBOL) back in the last century
    after I read an article on heuristic programming. The result was a
    program that could (eventually) find its way through a maze. There was
    no guarantee that the way it found was the BEST way, but, provided the
    maze COULD be solved, it would solve it.

    Heuristic programs start off with a heuristic (a "rule of thumb", if you
    like; for example: "Every time you encounter a wall, turn left").
    Another part of the program monitors progress and checks whether the
    same junction keeps being reached, so it detects when it is going in
    circles and modifies the heuristic at that point. After a period of time
    or a number of changes of direction (decided by the programmer
    initially, but possibly modified by the program in an advanced
    incarnation), the heuristic is modified and the process is iterated.
    Obviously, depending on the power of the processor, several billion
    alterations and tries can be made in a relatively "short" period of
    time, and eventually a successful solution is reached. (The final
    heuristic depends on the size of the maze, but it will typically be a
    long string of "Right"s and "Left"s.)

    I got really excited about this and discussed it with some of my peers
    over coffee at work. The general reaction was one of horror...

    The only way to "know" how the solution was reached would be to log
    every change made (including the discarded ones) and the "audit trail" (containing billions of lines) would require several Human lifetimes to analyze, so you'd need another computer to do that...

    One of my colleagues summed up the general feeling by saying: "Pete,
    don't you find it a bit frightening that a computer can reach a result
    without anyone knowing how it got there?" (As I had programs under
    development for work that did that every day [it's called: "Debugging"],
    I wasn't in the least bit fazed by it...)

    I remember replying: "No, I find it exciting...".

    The applications of heuristic programming were kind of limited to things
    like production planning. (I remember looking at a heuristic corrugator
    cardboard scheduler, written in Fortran, which determined the best
    places to cut long runs of corrugated cardboard for packaging. One of my
    colleagues, a young girl who had just graduated from Auckland University
    of Technology [this establishment no longer exists...], took the
    corrugator scheduler "under her wing" and became a world expert in the
    use of the package.)

    All of these events were in the early 1970s.

    They were, at least for me, the first inkling that a computer maybe did
    not have to be told how to do everything... the first glimmers of
    "artificial intelligence".

    Subsequently, as IT gathered momentum and some of the world's best and brightest were attracted to the newly emerging "Computer Science", the
    early attempts at neural networks were developed, with inconclusive
    results. There was no shortage of scoffers and naysayers, and in some
    cases, the sheer arrogance of "computer programmers" (how could an
    inanimate machine ever be able to do what I do? Without me, that machine
    is just a useless assembly of wires and metal...) made me want to punch people.

    But, today we are poised on the brink of another great leap.

    The deep networks and heuristics used in today's AI are quantum leaps
    above the first stumbling steps discussed above.

    Data CAN be used to train systems and that will just be the beginning.

    I remember writing a database management system called "DIC" - Database Interface Control - (for VSAM/KSDS on IBM mainframes), back in the late
    1970s. It was eventually superseded when the company standardized on the
    new Relational Database technology in the mid 1980s and went to DB2. But
    for around 6 years DIC managed the distribution of photo-copiers across
    14 countries in Europe and it was well received by everybody except the French. (No surprises there... :-))

    The point is that when I designed and built it I remember saying to my
    team: "Because DIC will control the interface to the data, we could
    write a module that will monitor how and where data is accessed. We
    could then use that information to manually adjust and organize the
    physical VSAM files for optimal access...or, we could let DIC interface
    to IDCAMS and do it automatically."

    A case of the actual data causing re-organization of itself, depending
    on how it was being accessed. (Most modern RDB systems have modules that
    can do this, at least at a rudimentary level. Good DBAs use this data to control physical organization of datasets.)
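
    In modern dress, the monitoring half of that idea fits in a few lines.
    A toy sketch in Python; the key buckets, the thresholds and the "hot
    range" decision are invented for illustration, and the actual hand-off
    to IDCAMS is left out:

        from collections import Counter

        class AccessMonitor:
            """Counts accesses per key range and suggests a reorganisation
            when most of the traffic piles into one range."""

            def __init__(self, reorg_threshold=0.6, min_accesses=1000):
                self.hits = Counter()
                self.reorg_threshold = reorg_threshold
                self.min_accesses = min_accesses

            def record(self, key):
                self.hits[key[:2]] += 1        # bucket by key prefix (a country code here)

            def hot_range(self):
                total = sum(self.hits.values())
                if total < self.min_accesses:
                    return None                # not enough evidence yet
                bucket, count = self.hits.most_common(1)[0]
                if count / total >= self.reorg_threshold:
                    return bucket              # the range worth moving or resizing
                return None

        # Feed it every read/write; ask it periodically whether to reorganise.
        monitor = AccessMonitor(min_accesses=10)
        for key in ["FR1001", "FR1002", "FR1003", "FR1004", "FR1005",
                    "FR1006", "FR1007", "FR1008", "DE2001", "UK3001"]:
            monitor.record(key)
        print(monitor.hot_range())             # -> 'FR'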

    I posted the link above because there may be some people here who share
    my enthusiasm for the idea of computers programming themselves. The
    article is quick to point out that a Human is required to broker the
    data being used for training, so they postulate that the ROLE of a
    programmer will change.

    "Releases will be based on the software being trained to another level; updates will be based on new data sets and experiences."

    This conjures the image of a programmer as a "lion tamer", whip in one
    hand, template in the other... :-)

    With that happy thought, I have to get back to writing some particularly
    complex code that has had my head spinning for the last 2 days... I'm
    only sorry that I probably won't see the day when THAT code could write
    itself...

    Pete.
    --
    I used to write COBOL; now I can do anything...

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From Richard@1:2320/100 to comp.lang.cobol on Tue Jan 24 20:22:45 2017
    From Newsgroup: comp.lang.cobol

    On Wednesday, January 25, 2017 at 4:30:42 PM UTC+13, pete dashwood wrote:

    Auckland University
    of Technology [this establishment no longer exists...]

    You have that the wrong way around. AUT is still around. In 2000 the Auckland Technical Institute (ATI) was granted university status and renamed itself to AUT.

    So, in the 70s she would have graduated from ATI, but that still exists just renamed to AUT, and still uses the Wellesley St campus but has several others as well.

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From pete dashwood@1:2320/100 to comp.lang.cobol on Wed Jan 25 23:21:36 2017
    From Newsgroup: comp.lang.cobol

    On 25/01/2017 5:22 p.m., Richard wrote:
    On Wednesday, January 25, 2017 at 4:30:42 PM UTC+13, pete dashwood wrote:

    Auckland University
    of Technology [this establishment no longer exists...]

    You have that the wrong way around. AUT is still around. In 2000 the Auckland
    Technical Institute (ATI) was granted university status and renamed itself to AUT.

    So, in the 70s she would have graduated from ATI, but that still exists just
    renamed to AUT, and still uses the Wellesley St campus but has several others as well.

    Thanks Richard.

    I got confused from seeing "AUT" on University Challenge... :-)

    It was, of course, "ATI".

    And yes, I remember attending some COBOL-related meetings at the
    Wellesley Street campus of ATI back around that time.

    Cheers,

    Pete.

    --
    I used to write COBOL; now I can do anything...

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From docdwarf@1:2320/100 to comp.lang.cobol on Wed Jan 25 12:47:08 2017
    From Newsgroup: comp.lang.cobol

    In article <eeqkevF17rfU1@mid.individual.net>,
    pete dashwood <dashwood@enternet.co.nz> wrote:
    It's interesting to see that what I have been doing for the last half
    century is now going to change, radically...

    (Where have I heard that before?)

    Take a look at this:
    http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html

    [snip]

    One of my colleagues summed up the general feeling by saying: "Pete,
    don't you find it a bit frightening that a computer can reach a result
    without anyone knowing how it got there?" (As I had programs under
    development for work that did that every day [it's called: "Debugging"],
    I wasn't in the least bit fazed by it...)

    I remember replying: "No, I find it exciting...".

    There should be another half-century of experience under your belt, Mr
    Dashwood. How many times in those decades have you been excited by being
    told 'The computer tells us (blatant and obvious representation of an
    alternate reality which bears no relation to the one in which you
    believe yourself to be and causes you inconvenience) so it must be so'?

    For example... being required to register for school at age 108. Being notified that one is in default of a fully-paid loan. Receiving magazine subscription-notices cheerfully declaring 'Now That You're Dead, You're Eligible for a Discount!'

    In the article 'Jim McHugh, vice president and general manager for
    Nvidia's DGX-1 supercomputer', asserts 'We're using data to train the
    software to make it more intelligent.'

    The assumption that intelligence is the result of training causes me to
    grin, albeit wearily.

    There was no shortage of scoffers and naysayers, and in some
    cases, the sheer arrogance of "computer programmers" (how could an
    inanimate machine ever be able to do what I do? Without me, that machine
    is just a useless assembly of wires and metal...) made me want to punch
    people.

    Wanting to punch people for arrogance is a sure sign of humility... and I
    am the King of England, God save the Me!

    DD

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From Greg Wallace@1:2320/100 to comp.lang.cobol on Wed Jan 25 17:42:11 2017
    From Newsgroup: comp.lang.cobol

    On Wednesday, 25 January 2017 13:30:42 UTC+10, pete dashwood wrote:
    <snipped>
    Hi Pete
    Interesting article.
    I like the idea that Data CAN be used to train systems and that will just be the beginning.
    I am a bit of a skeptic about media views of AI but can see it advancing.
    I think that Google is doing some of this today with the way a browser
    remembers your searches and anticipates keystrokes. For instance, all I
    have to do is type "you" and Google anticipates www.youtube.com. This is
    intelligence by keeping and using data. It is even specific to my logon.
    There is also worldwide sharing of data, so one can place a photo in
    Google Earth for all the world to see or find by search. I realize this
    is short of what you are talking about.
    I have a challenge for you.
    Which data structure does Google use in its search engine or does it have its own data structure?
    Is it a conventional database or discrete ISAM files or other?
    In the late 1970's, Digital Equipment Corporation (DEC) (remember them?)
    produced a promotional video in which they said something like "the
    computer is the evolution of the human mind". So we humans inject our
    intelligence into machines. Forty years later we are still doing it: a
    slow process, but an accelerating one.
    Today, I think my best internet friends are Google Chrome and YouTube,
    and throw in email too. I can find and learn so much because of this
    worldwide sharing of "searchable" data.

    Advertising is trying very hard to use AI by examining your searches and
    buying habits and targeting ads. For instance, if Coles finds you bought
    disposable nappies, their AI may assume there is a baby and therefore
    target ads for other baby products. The problem is linking a credit card
    purchase to an online user.

    Greg

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From pete dashwood@1:2320/100 to comp.lang.cobol on Thu Jan 26 14:58:35 2017
    From Newsgroup: comp.lang.cobol

    On 26/01/2017 1:47 a.m., docdwarf@panix.com wrote:
    In article <eeqkevF17rfU1@mid.individual.net>,
    pete dashwood <dashwood@enternet.co.nz> wrote:
    It's interesting to see that what I have been doing for the last half
    century is now going to change, radically...

    (Where have I heard that before?)

    Take a look at this:
    http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html

    [snip]

    One of my colleagues summed up the general feeling by saying: "Pete,
    don't you find it a bit frightening that a computer can reach a result
    without anyone knowing how it got there?" (As I had programs under
    development for work that did that every day [it's called: "Debugging"],
    I wasn't in the least bit fazed by it...)

    I remember replying: "No, I find it exciting...".

    There should be another half-century of experience under your belt, Mr Dashwood.

    Please don't wish that on me. I have enjoyed my career and I love what I
    do, but I don't think another 50 years would be appealing. The IT
    workplace is a demanding one and, as time has passed, I have come to
    value the tranquility of coding in an armchair (or, indeed, by a river bank...) over the frenetic stress of the cubicle and bullpen.

    How many times in those decades have you been excited by being
    told 'The computer tells us (blatant and obvious representation of an
    alternate reality which bears no relation to the one in which you
    believe yourself to be and causes you inconvenience) so it must be so'?

    Many, many times, Doc. Although that is not quite the same excitement,
    because it means the tedium of debugging must be undergone...

    However, having studied information theory (and read Marshall McLuhan) I
    am well aware that "the medium is the message", so if it's on green
    lineflow (or, these days, a computer screen...), it must be true... :-)

    When I first encountered this idea I was sceptical... surely Managers
    couldn't be so dumb as to believe everything that came from a computer?
    Could they...?

    I changed a Management Reporting program so that it gave completely
    untrue and quite ridiculous numbers, just to see if anybody noticed...

    All of the Upper Management, with the sole exception of the CEO,
    believed the Company was facing collapse and disaster.

    The only reason I didn't get fired was because the CEO had a sense of
    humour and also realized the point I was trying to make. (To be fair, my
    Boss, after giving me a good dressing down, sprang to my defence and
    said I was a valuable employee...) I was 23 years old at the time...

    I lost an increment and some bonuses but it was worth it for the
    experience.



    For example... being required to register for school at age 108. Being notified that one is in default of a fully-paid loan. Receiving magazine subscription-notices cheerfully declaring 'Now That You're Dead, You're Eligible for a Discount!'

    I never suggested that computers are always right... :-)

    I just like the idea that it may be possible for them to evolve into
    something that can program itself and develop a completely new kind of "intelligence".



    In the article 'Jim McHugh, vice president and general manager for
    Nvidia's DGX-1 supercomputer', asserts 'We're using data to train the software to make it more intelligent.'

    The assumption that intelligence is the result of training causes me to
    grin, albeit wearily.

    I take your point. However, in this case, if the training DOESN'T result
    in "intelligence" then the algorithms used are modified and the process iterates until an intelligent result DOES eventuate.

    One way that Humans learn is by "trial and error". I see no reason why software can't do so too. The advantage that the machines have over us
    is that they can make billions of trials in a very short time and
    quickly discard what doesn't work. The advantage we have over them is
    that we perceive more than just the data being presented for the trial,
    so we can learn from fewer attempts, but there is no reason I can see
    to assert that one approach is "better" than the other...
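
    (For anyone who would rather see that loop than argue about it: guess,
    measure, keep what helps, discard what doesn't. A toy sketch in Python;
    the "task" of fitting y = 3x + 2 from a handful of samples is invented
    purely to have something to train on.)

        import random

        data = [(x, 3 * x + 2) for x in range(-5, 6)]      # the "experience"

        def error(w, b):
            return sum((w * x + b - y) ** 2 for x, y in data)

        w, b = 0.0, 0.0                                    # start knowing nothing
        best = error(w, b)
        for _ in range(100000):                            # a machine could afford billions
            nw = w + random.uniform(-0.1, 0.1)             # try a small random change
            nb = b + random.uniform(-0.1, 0.1)
            e = error(nw, nb)
            if e < best:                                   # keep it only if it helps
                w, b, best = nw, nb, e

        print(round(w, 2), round(b, 2))                    # settles close to 3.0 and 2.0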

    There was no shortage of scoffers and naysayers, and in some
    cases, the sheer arrogance of "computer programmers" (how could an
    inanimate machine ever be able to do what I do? Without me, that machine
    is just a useless assembly of wires and metal...) made me want to punch
    people.

    Wanting to punch people for arrogance is a sure sign of humility... and I
    am the King of England, God save the Me!

    I never indulged the impulse... OK, maybe once, but that was outside
    working hours so it doesn't count... :-)

    (Besides, warnings were given and ignored, so he had it coming. He
    agreed as much the following day when I apologised, but I don't
    recommend violence as a preferred tool for Project Managers.)

    Pete.
    --
    I used to write COBOL; now I can do anything...

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From pete dashwood@1:2320/100 to comp.lang.cobol on Thu Jan 26 16:03:22 2017
    From Newsgroup: comp.lang.cobol

    On 26/01/2017 2:42 p.m., Greg Wallace wrote:
    On Wednesday, 25 January 2017 13:30:42 UTC+10, pete dashwood wrote:
    It's interesting to see that what I have been doing for the last half
    century is now going to change, radically...

    (Where have I heard that before?)

    Take a look at this:
    http://www.infoworld.com/article/3160088/application-development/want-to-be-a-software-developer-time-to-learn-ai-and-data-science.html

    <snipped>
    Hi Pete

    Interesting article.


    Thanks, I'm glad you enjoyed it. I debated whether to post it to my
    account on LinkedIn but it's really more oriented to software developers
    and CLC is a place I really value as one of the last uncensored forums
    on the planet. I'd hate to see it disappear, even though the original
    reasons I came here have long been overtaken by events.


    I like the idea that Data CAN be used to train systems and that will just be
    the beginning.

    I am a bit of a skeptic about media views of AI but can see it advancing.

    Sure, we need to balance what we get through media, against experience
    and common sense...

    I think that Google is doing some of this today with the way a browser
    remembers your searches and anticipates keystrokes. For instance all I have to do is type "you" and Google anticipates www.youtube.com. This is intelligence by keeping and using data. It is even specific to my logon. There is also worldwide sharing of data so one can place a photo in Google Earth for all the world to see or find by search. I realize this is short of what you are talking
    about.

    It is a fair, if basic, example.
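
    The mechanism needn't be anything grander than a per-user tally of past
    searches and ranked matching against whatever has been typed so far. A
    toy sketch in Python, with made-up history entries and nothing to do
    with Google's real machinery:

        from collections import Counter

        history = Counter()                    # one tally per logged-in user

        def searched(term):
            history[term] += 1                 # remember every search made

        def suggest(typed, n=3):
            """Most frequent past searches containing what has been typed so far."""
            matches = [(count, term) for term, count in history.items() if typed in term]
            return [term for count, term in sorted(matches, reverse=True)[:n]]

        for term in ["www.youtube.com", "www.youtube.com", "www.youtube.com",
                     "young henrys brewery", "yoga classes auckland"]:
            searched(term)

        print(suggest("you"))   # ['www.youtube.com', 'young henrys brewery']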

    I have a challenge for you.
    Which data structure does Google use in its search engine or does it have its
    own data structure?

    It is a proprietary file structure distributed across millions of servers.

    Obviously, in the highly competitive world of SEO, GOOGLE play their
    cards close to their chest and details of their algorithms are closely guarded. (The main search algorithms change quite frequently, partly
    because they need to and partly to prevent reverse engineering and
    people gaining unfair advantage.)

    What we can say with a fair amount of certainty is that the engine falls
    into two parts: Crawlers and Indexers.

    There are MANY instances of each of these components, working 24/7. (The
    GOOGLE engines were originally developed by Larry Page and Sergey Brin,
    who called the project "BackRub" because it was designed to explore back
    links on the web. The programming languages were Java and Python. Python
    is still used today for Crawlers, but the indexers are now in C/C++. The
    use of OO languages facilitates the instancing of MANY objects
    (components) from a single Class template.)
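
    In miniature, the Crawler/Indexer split looks something like this. A toy
    sketch in Python, with three made-up in-memory "pages" standing in for
    the web; real crawlers fetch over HTTP, and the real index structures
    are GOOGLE's own, as noted above:

        from collections import defaultdict

        WEB = {   # page -> (text, links)
            "home": ("cobol is alive and well", ["news", "jobs"]),
            "news": ("ai will change software development", ["home"]),
            "jobs": ("wanted cobol and ai developers", ["home", "news"]),
        }

        def crawl(start):
            """The crawler: follow links outward from a seed page."""
            queue, seen = [start], set()
            while queue:
                page = queue.pop()
                if page in seen:
                    continue
                seen.add(page)
                text, links = WEB[page]
                yield page, text
                queue.extend(links)

        def build_index(pages):
            """The indexer: word -> set of pages containing it (an inverted index)."""
            index = defaultdict(set)
            for page, text in pages:
                for word in text.split():
                    index[word].add(page)
            return index

        index = build_index(crawl("home"))
        print(sorted(index["cobol"]))      # ['home', 'jobs']
        print(sorted(index["ai"]))         # ['jobs', 'news']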


    Is it a conventional database or discrete ISAM files or other?

    It is "other". The structure used by the indexers is proprietary to
    GOOGLE. These structures are held on the server where the page resides
    and that is part of what makes the searches so fast.


    In the late 1970's, Digital Equipment Corporation (DEC) (remember them?)

    Yes, I remember DEC VAX and even back to PDP 8... did you know that in
    the 70s the US could not export computer technology to countries that
    were considered "unfriendly"? After closing a multi-million dollar deal
    with a couple of Eastern European nations, this particular Law
    threatened to be a show-stopper. So, they said: "It's NOT a computer;
    it's a "Programmable Data Processor"... " The sale went through.

    I'm writing this on an HP laptop; Compaq acquired DEC, and Compaq in
    turn was acquired by HP... the wheel keeps turning.

    produced a promotional video in which they said something like "the
    computer is the evolution of the human mind". So we humans inject our
    intelligence into machines. Forty years later we are still doing it: a
    slow process, but an accelerating one.

    I'm not sure that that is a fair way to put it, because I believe the
    machines will have a different "intelligence" to that which we have, but
    it is certainly true that whatever they end up with will have been
    initially sparked by Humans.

    Today, I think my best internet friends are Google Chrome and YouTube,
    and throw in email too. I can find and learn so much because of this
    worldwide sharing of "searchable" data.

    In the latter part of the Middle Ages gentlemen went to a "University"
    where they acquired a universal education... everything from Latin &
    Greek, through history and Mathematics, to things like Fencing, Manners
    and Drawing. The sum total of Human knowledge could fit into a few
    University libraries... If you studied for a lifetime you could end up
    with a fair handle on just about everything that was known. Graduates
    could be considered to be "spherical" in their education.

    By the beginning of the 19th century, the "knowledge explosion" was
    beginning and it is continuing even as I write this. The Sum total of
    Human knowledge is now so vast that all the libraries in the world can't contain it, because books cannot be written and printed fast enough to
    keep up with it. It is no longer possible to get a "spherical"
    education; you have to specialize. Graduates now are generally
    "conical", pointed to a particular specialization... :-)

    The only thing that even gives us a chance of not losing everything we
    have learned and reverting to the Dark Ages is the storage and
    distribution of information on electronic media as soon as it becomes
    available.

    The Internet (a network of networks) may well be a two-edged sword but
    life as we currently enjoy it would be impossible without it.

    Advertising is trying very hard to use AI by examining your searches and
    buying habits and targeting ads. For instance, if Coles finds you bought disposable nappies, their AI may assume there is a baby and therefore target ads for other baby products.

    Exactly. Although we all bitch about ads, they are one of the things
    that finances and drives the advancement of the Internet.

    The problem is linking a credit card purchase to an online user.

    Greg

    Thanks for your post, Greg.

    Pete.
    --
    I used to write COBOL; now I can do anything...

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From Charles Hottel@1:2320/100 to comp.lang.cobol on Wed Jan 25 23:31:14 2017
    From Newsgroup: comp.lang.cobol


    "Greg Wallace" <gregwebace@gmail.com> wrote in message news:366cacfb-5c40-4470-8b4a-225e9a03e04a@googlegroups.com...
    <snipped>


    In the U.S. we have a store named Target and they will send you specific ads and coupons based on what you have previously purchased. They have a
    program that predicts what you may be interested in buying. One father
    became irate when his household received ads targeted for someone who was going to have a baby. He called up and complained because his wife was too old and his daughter was too young and no one in his household could
    possibly be pregnant. Later when he talked to his wife about it he
    discovered that his daughter was pregnant and they had been hiding that fact from him. So Target deduced it before the father even knew about it.
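
    The prediction side of such a system can be caricatured as a weighted
    score over purchase signals. A back-of-the-envelope sketch in Python;
    the products, the weights and the cut-off are invented for the example,
    and Target's real model is obviously not public:

        # Score a household for one interest ("expecting a baby") from purchases.
        BABY_SIGNALS = {
            "unscented lotion":  2.0,
            "prenatal vitamins": 4.0,
            "cotton balls":      1.0,
            "large tote bag":    0.5,
        }

        def expecting_score(purchases):
            # Sum the weights of any purchases that look like baby signals.
            return sum(BABY_SIGNALS.get(item, 0.0) for item in purchases)

        household = ["unscented lotion", "prenatal vitamins", "cotton balls", "bread"]
        if expecting_score(household) >= 5.0:      # invented threshold
            print("send the baby-product coupons")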

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From pete dashwood@1:2320/100 to comp.lang.cobol on Thu Jan 26 19:20:42 2017
    From Newsgroup: comp.lang.cobol

    On 26/01/2017 5:31 p.m., Charles Hottel wrote:
    <snipped>


    In the U.S. we have a store named Target and they will send you specific ads and coupons based on what you have previously purchased. They have a
    program that predicts what you may be interested in buying. One father
    became irate when his household received ads targeted for someone who was going to have a baby. He called up and complained because his wife was too old and his daughter was too young and no one in his household could
    possibly be pregnant. Later when he talked to his wife about it he discovered that his daughter was pregnant and they had been hiding that fact from him. So Target deduced it before the father even knew about it.


    Is it insecurity or spiteful schadenfreude that makes most of us happy
    when a machine screws up? A little reminder that Humans are still
    smarter... even though there are some VERY dumb Humans about (see the
    Darwin awards...). We conveniently forget that, currently, most of the
    programming is done by Humans...

    The day will come when the machines WON'T screw up (it might well
    coincide with the time when they no longer need Humans to program
    them...) and you then have a choice of accepting you have been
    superseded (and facing the consequent pointlessness of your existence),
    or throwing a party for them... :-)

    Personally, I like the party idea, but I doubt I'll be here when it happens.

    Pete.
    --
    I used to write COBOL; now I can do anything...

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From docdwarf@1:2320/100 to comp.lang.cobol on Thu Jan 26 13:41:06 2017
    From Newsgroup: comp.lang.cobol

    In article <s4KdnYf01coa5hTFnZ2dnUU7-QXNnZ2d@earthlink.com>,
    Charles Hottel <chottel@earthlink.net> wrote:

    [snip]

    In the U.S. we have a store named Target and they will send you specific
    ads and coupons based on what you have previously purchased. They have a
    program that predicts what you may be interested in buying. One father
    became irate when his household received ads targeted for someone who
    was going to have a baby. He called up and complained because his wife
    was too old and his daughter was too young and no one in his household
    could possibly be pregnant. Later when he talked to his wife about it he
    discovered that his daughter was pregnant and they had been hiding that
    fact from him. So Target deduced it before the father even knew about it.

    When this story came out last year my thought was 'if this is true then
    the father or the daughter stands to make a Great Deal of Money by
    exploiting these circumstances'. 'Hi, I'm Barbara/Bobby and Target knows
    me SO well that...'

    DD

    SEEN-BY: 154/30 2320/100 0 1 227/0
  • From docdwarf@1:2320/100 to comp.lang.cobol on Thu Jan 26 13:44:11 2017
    From Newsgroup: comp.lang.cobol

    In article <eet3eeF8ikpU1@mid.individual.net>,
    pete dashwood <dashwood@enternet.co.nz> wrote:

    [snip]

    I just like the idea that it may be possible for them to evolve into
    something that can program itself and develop a completely new kind of
    "intelligence".

    Many likeable ideas are problematic in practice; 'always act justly'
    seems downright fine until a situation needs mercy.

    DD

    SEEN-BY: 154/30 2320/100 0 1 227/0