
Space Engineers, GoodAI and KSH in general

Discussion in 'General' started by Logi, Nov 29, 2017.

Thread Status:
The last post in this thread was made more than 31 days ago.
  1. halipatsui Senior Engineer

    Messages:
    1,253
    #heroesofmightandmagic
     
  2. Devon_v Senior Engineer

    Messages:
    1,602
    Funny thing, when I was much younger I used to complain all the time that the computer was cheating, and my mother would tell me that wasn't possible because the computer just follows rules. :)
     
  3. FoolishOwl Junior Engineer

    Messages:
    507
    The entire GoodAI project is overoptimistic. Most of the documents on the GoodAI site refer to a goal of developing general AI or general-purpose AI, which is a very ambitious goal. But in a few places, they refer to their goal as "human-level general purpose AI", and that seems to rest on naive assumptions: that the meaning of natural language expressions can be completely analyzed in terms of first-order logic, and that the computational theory of mind completely explains human consciousness. It pretty much reads like a reversion to the AI research ideas of the 1960s, which have long since been abandoned; those were themselves reversions to ideas that analytic philosophy aggressively pursued, and then abandoned as conceptually flawed, in the early 20th century.
     
    Last edited: Dec 4, 2017
    • Informative x 1
  4. Malware Master Engineer

    Messages:
    9,552
    I don't have nearly enough information on what they're actually doing to judge - so I won't. Pretty much all of these AI researchers have what I personally would consider lofty goals. But I do believe artificial intelligence is theoretically possible, simply because the concept of intelligence is proven in us - thus it must be replicable. Whether or not we'll be able to solve all the problems before we ourselves become extinct - that is a completely different matter.
     
  5. FoolishOwl Junior Engineer

    Messages:
    507
    I agree with this part.

    My criticism was that, judging by GoodAI's roadmap and other statements, it looks like they're adhering to a particular model of consciousness, the computational theory of mind, that posits you can abstract minds from bodies, and treat minds as formal systems of manipulation of representations and symbols. I'd argue that's only an aspect of consciousness, not the whole of it.

    An example is that their roadmap has a section devoted to language acquisition that proceeds from parsing the formal structure of simple sentences to inferring meaning. That's an enormous leap, and incidentally exactly the idea John Searle criticized in his "Chinese Room" thought experiment.

    Anyway, there are obvious ways in which research into heuristics for 2D and 3D navigation, and language analysis, are useful. I just don't see them as the direct route to reproducing human consciousness.
     
    Last edited: Dec 4, 2017
    • Informative x 2
  6. Lord Grey Apprentice Engineer

    Messages:
    344
    A little off-topic, but something you should keep in mind when talking about AIs. Not for the faint-hearted. It's still fiction as far as I know, and hopefully it will stay that way.
     
  7. Sinbad Senior Engineer

    Messages:
    2,788
    Beyond the fact that we already make weapons that can choose when and how to strike a target (look at modern torpedo and missile guidance systems for a start), public opinion is only part of the issue. Military budgets can accelerate research considerably, but usually only for military goals. The way we combat this is to beat the military to the punch. Companies like DeepMind, Google (their AI goals go beyond better-targeted advertising), GoodAI, OpenAI, heck, even Apple, Microsoft and IBM are all trying to produce an AGI aimed at the more positive end of potential applications for AI. That's not counting all the university projects, high school projects and even lone programmers having a crack at it. Which raises another point: who would you rather get it first, a military that faces government and public ethical oversight, or some brilliant, angst-filled teen who stumbles upon the missing piece with no oversight at all?
    I subscribe to the 'rapid and cataclysmic change' argument. Basically, any general AI that is as intelligent as a human will be able to improve itself beyond human intelligence faster than we could improve it. Probably much faster, and before we notice it is doing it. A few days, weeks or months after it first wakes up, it will not only have crashed any other AI projects in a bid for its own survival, but also have bootstrapped itself beyond the collective intelligence of mankind. Then we either become immortal gods with an even more godlike benefactor, or we go extinct for one reason or another. The outcome depends on too many factors that we just can't pin down well enough to guess at. And it's going to happen: there are too many organisations trying for it, registered or not, for it to fail. It's just a matter of when, and who presses the button. We can only hope they have taken every precaution they can think of to try to sway the outcome in our favour.
    AGI is the last thing we will ever invent. Interpret that as you like.


    As for the Chinese room, its lesson is generally misunderstood. Neither the person in the room nor the instruction set speaks Chinese. But then again, no individual part of a Chinese person speaks Chinese either; only the whole can be said to seem to speak Chinese. Meaning the room, the person inside and the instruction set together make a system that, for all outward appearances, speaks Chinese. That's no different from a Chinese person, except that we are aware of how each part of the room operates. What Searle was trying to say with the thought experiment was that there is no one part that can be said to 'understand', but the complexity of the system as a whole can exhibit all the properties of 'understanding' to such a fidelity that it's indistinguishable. It wasn't meant as a comment on the futility of AI, or on a particular approach to AI. It was meant as a comment on the idea that there is a qualitative difference between a perfect simulation and the genuine article. The short point being: there isn't one, so who can prove that what we call consciousness or awareness is actually a unique, definable and testable state, especially when divorced from biology? For all we know, your head is an English room. It all starts getting uncomfortably existential at that point. :(
     
  8. FoolishOwl Junior Engineer

    Messages:
    507
    Here's the Chinese Room thought experiment, as Searle summarized it.
    Here, you're basically restating the Turing Test. But Searle was directly criticizing it, arguing that it was inadequate as an indication of consciousness.
    As far as the room is concerned -- which, by the way, is supposed to be a model of a computer running a program -- what you're saying is what Searle called The Systems Reply: the idea that the room as a whole understands Chinese, even though no specific part of it does. Searle was arguing that the system as a whole does *not* understand Chinese, only formal rules of syntax, and that complete knowledge of syntax does not confer knowledge of semantics.

    I believe you're right to say that "no individual part" of a person understands language, but that's a somewhat different matter. As I was saying earlier, I think the computational theory of mind explains aspects of consciousness, but not the whole of it. The whole of it involves a great deal that can't simply be reduced to a formal system of manipulation of symbols. If anything, my main criticism of Searle's argument (and one that led me to misunderstand it for a long time) is that by conceding a program could completely understand syntax, he concedes too much. I don't think the syntax of natural languages can be understood in isolation from semantics. But, that's to say that you can't even get as far as the Chinese Room.

    The funny thing is, arguably, we had computer programs that passed the Turing Test decades ago, starting with ELIZA, the ancestor of today's much more sophisticated chatbots. That is, at least sometimes, people will interact with such things and feel they're interacting with a person. But I believe this is less about sophisticated software approaching human-like consciousness, and more about humans having a strong proclivity to anthropomorphize: to interpret phenomena as if they were human behavior, to look for ourselves in the world around us. This is most often cited as a weakness, a bias in our perceptions, but I think it is also one of our greatest strengths.

    In the first place, we're social animals, perceiving other humans as like ourselves, with internal states similar to our own. This, I think, we were able to extend to the animals we domesticated; we've had the most success with complex social animals that are fairly like ourselves. Then there's our ability to empathize with fictional characters, and so to imagine complex narratives and alternative models of the world. And notice how often we treat tools as if they were literally parts of our bodies, and how we design complex machines, structures, and vehicles to resemble living things. This often works fairly well, and makes it easy for other people to understand what we've created. Where's the face on a car? Where are its legs? And how easy was it to answer that? We even talk about software as if it had intentions, though we know it doesn't: "Steam wants a password".
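    To give a sense of how little machinery was behind that effect, here's a toy ELIZA-style pattern-reflection sketch in Python. The rules are invented for illustration, and this is nothing like Weizenbaum's actual script; it's just the general trick of reflecting the user's own words back:

    ```python
    import re

    # A few ELIZA-style rules: match a pattern in the user's input and
    # reflect part of their own words back as a question. Toy rules,
    # invented here, not Weizenbaum's actual script.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def respond(text: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # stock fallback keeps the conversation going

    print(respond("I feel like the computer understands me"))
    # -> Why do you feel like the computer understands me?
    ```

    There's no model of meaning anywhere in there, and yet people attributed understanding to ELIZA.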

    As Malware has been pointing out, the general AI that GoodAI is presumably working on is quite a different beast from the sort of AI we'd like to see within a computer game. But that's because a computer game AI can "fake it" in a lot of ways and still feel like intelligent behavior.
     
    Last edited: Dec 5, 2017
  9. Sinbad Senior Engineer

    Messages:
    2,788
    @FoolishOwl
    You seem to suggest there is some ineffable quality of the human mind that makes us special. How magically delightful that we are so blessed to be raised above the animals; surely no mere machine could ever capture the magnificent essence that makes us so special.
    I just can't believe that. We are electrochemical machines. Wonderfully complex, but programmed and dictated no less than what I'm currently cradling in my mutated ape hand.
    If I told a computer to bring me a banana and minutes later I'm holding one, does it matter if it knows what a banana is? If I had a bowl of wax fruit and a bowl of real fruit in the kitchen, I might be holding a bent yellow candle right now. But then again, if I had asked a ten-year-old human to get me a banana in that situation, I think I would be equally likely to end up in the same fruitless position.
    My dog doesn't know the meaning of the word sit, but will still carry out the appropriate action when commanded to sit. It appears she understands, because her actions are a suitable response to the word.
    On the other hand, if I ask my computer what the weather is like, I am told today's weather for my current location. The reply is appropriate to the query. To be sure, my phone can't possibly know what weather is, or that I meant I wanted to know about today's conditions that are likely to impact me, yet I'm not planning to carry an umbrella when I go out. I have the exact information I asked for.
    The dog, the phone, even my ten-year-old daughter: which one appears to understand, and which one really understands? Is there a difference?
    The only one I can see is that in two of them we can't trace the complete data path between request and action. These we label as understanding. In the other we can follow it through from cause to effect and never bump into something that understands.
    As we learn more about the brain and how it works, will we come to the same conclusion about my dog and daughter? I believe we will.
    I think in our quest to make AI we will strip ourselves of the last remnants of our specialness, as it's all laid bare and exposed as just a neural network, preprogrammed by genetics and altered by our environment to provide appropriate responses to stimulus.
    There is no special understanding, just apparent understanding. And that's good enough for me.
     
    • Agree Agree x 1
  10. FoolishOwl Junior Engineer

    Messages:
    507
    No, I'm simply arguing that there's a great deal more to consciousness than can be described simply in terms of formal systems of symbolic manipulation. And this doesn't mean we're "above the animals"; I believe that the consciousness of other complex animals is generally similar to ours. And for that matter, I agree that life can be described as very complex machinery. What I disagree with is the idea that consciousness can simply be reduced to a formal system of symbolic manipulation. That is, I don't think any amount of analysis of sentence structure will lead to understanding of what those sentences mean. That's an enormous leap. There's much more to the problem than that.
     
    • Agree x 1
    • Friendly x 1
  11. Sinbad Senior Engineer

    Messages:
    2,788
    @FoolishOwl I'm not disagreeing that a formal system of symbolic manipulation is inadequate. I'm disagreeing that engineered understanding is a necessary step: we don't have any definition of 'understanding' that differs from what a state-machine voice interface like Siri or Cortana can demonstrate. In my last example I used the task of retrieving a banana to illustrate this. The machine has a definition of what a banana is, and is able to understand 'bring me'. It can even understand that I want 'one'. How is that different from my daughter understanding the same request? The difference doesn't affect whether the instruction is carried out, so does the difference matter? If I didn't get a banana at all, or if I got 27 of them, then it obviously doesn't understand the request. But if I do get a banana, I think it understands enough to be said to have understood.
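    To make that concrete, here's roughly the kind of shallow mapping a command interface can get away with. This is a minimal sketch in Python with invented intent names and trigger phrases; no claim that Siri or Cortana actually work this way:

    ```python
    # Minimal keyword-based intent matcher: the kind of shallow mapping a
    # voice interface can get away with. Intent names and trigger phrases
    # are invented for illustration.
    INTENTS = {
        "fetch_item": ["bring me", "get me", "fetch"],
        "weather_query": ["weather", "rain", "forecast"],
    }

    def classify(utterance: str) -> str:
        text = utterance.lower()
        for intent, phrases in INTENTS.items():
            if any(phrase in text for phrase in phrases):
                return intent
        return "unknown"

    print(classify("bring me a banana"))    # fetch_item
    print(classify("what's the weather?"))  # weather_query
    ```

    If the banana arrives, the shallowness of that mapping never shows.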
    That said, I think the way forward is through neural networks. At the moment they are limited in scope, size and adaptability, but that's improving every year. I think the focus needs to move from 'smart computer' to 'non-biological brain'. I don't necessarily mean modelling a biological brain, as that would also require simulating the millions of sensory inputs and motor control outputs of a biological brain. I think we need to start from scratch and use biological evolution as an example of how a separate machine evolution should proceed. Start with cameras instead of retinas, motors instead of muscles, and a blank network with a bit of rudimentary structure to bias its initial reactions in the direction we want. Let it wander around, try to teach it like you would an infant animal, and see what happens. If we get one to the 'talking and making sense' stage, then add in the machine equivalent of a direct neural interface and turn it into a 'smart computer in a box'. I think researchers are chasing ghosts looking to emulate such vague notions as consciousness or understanding. I believe these are emergent from any sufficiently complex neural network, and no special mechanism needs to be engineered to achieve an effect that fits the definition (see the sketch at the end of this post).
    And I know it's not a popular point of view in the field at the moment, but I really do think biomimetics can be extended into neuro-biomimetics, to our advantage.
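    Here's that sketch: a toy in Python, entirely invented for illustration, of a 'blank' network whose weights are randomly mutated and kept only when they score better, learning XOR rather than anything embodied:

    ```python
    import math, random

    # Hill-climbing "evolution" of a tiny 2-2-1 sigmoid network on XOR:
    # start from random weights (a blank network), mutate them, and keep
    # the mutation whenever it scores better. A toy stand-in for the
    # evolve-don't-engineer idea; it usually solves XOR, not guaranteed.
    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        sig = lambda v: 1 / (1 + math.exp(-v))
        h0 = sig(w[0] * x[0] + w[1] * x[1] + w[2])  # hidden neuron 0
        h1 = sig(w[3] * x[0] + w[4] * x[1] + w[5])  # hidden neuron 1
        return sig(w[6] * h0 + w[7] * h1 + w[8])    # output neuron

    def error(w):
        return sum((forward(w, x) - y) ** 2 for x, y in CASES)

    w = [random.uniform(-1, 1) for _ in range(9)]
    for _ in range(20000):
        candidate = [wi + random.gauss(0, 0.3) for wi in w]
        if error(candidate) < error(w):  # selection: keep what helps
            w = candidate

    print([round(forward(w, x)) for x, _ in CASES])  # usually [0, 1, 1, 0]
    ```

    Nobody engineers 'understanding' into it; whatever behaviour it ends up with is just whatever survived selection.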
     
    • Like x 1
    • Agree x 1
  12. FoolishOwl Junior Engineer

    Messages:
    507
    Incidentally, while re-reading some stuff on this subject, I came across references to a science fiction novel I read a few years ago: Blindsight, by Peter Watts. It's excellent, and if you've actually read this thread, you'd probably enjoy it.
     
    • Informative x 1
  13. chrisb Senior Engineer

    Messages:
    1,458
    I would just like to see rudimentary AI/NPCs in SE. That would be nice.
    I know we have the remote control and waypoint-setting things, but having crew just pottering around would help fill the worlds, certainly for SP.
     
    • Agree x 2
  14. Merandix Junior Engineer

    Messages:
    519
    A) What do those files actually DO?

    B) At first glance, I honestly would expect a game that sold 2.4 million copies to have a couple more programmers. On the other hand (and that's my own observation, which may well be wrong): I get the feeling the company currently isn't too well equipped, in terms of tools, for a much larger team. With their system of branching and branch merging, I think a much larger team would slow development rather than accelerate it. I think it was a conscious decision for Keen to keep the team small and nimble, stick to the tools they have, and not waste time and money transitioning the game to another method of branching.

    A disadvantage is that some large projects (like rewriting physics) will take longer to complete, due to the smaller number of people working on them. I see smaller teams on other games that appear more productive than Keen, but those generally have a much more rigid roadmap. I also see Keen taking its time to adapt the codebase to the current vision of the project, which is more than commendable. They are willing to take risks by essentially recreating large parts of the game that already exist. The public reception of the physics rewrite proves this: Keen made a massive effort, but the huge risk is that the playerbase sees it as a 'bugfix'. It's huge, but it's being received as something minor, because physics isn't a new feature.

    I think Keen can be proud of what they did. Yes, some errors were made in the early days of the game, and those minor errors grew in size as development progressed. But I think they are currently on their way to solving those problems. It feels as though they are on a solid roadmap now, but they need to fix the codebase to conform to it, and that takes time.

    A huge disadvantage of a larger team is that it also goes through money a lot faster.

    C) Turnover in programming is always high. A friend of mine gets continuous job offers from all over the country, even while employed. Most people have some sense of loyalty and want some sense of belonging, and most offers are more or less identical, so you don't leave your current job unless the offer is a LOT better... but the sheer volume of offers still causes high turnover.

    ---------------------------
    My beliefs about AI, which may be somewhat off-topic; hence the spoiler.
     
  15. Bumber Senior Engineer

    Messages:
    1,018
     
    Last edited: Dec 21, 2017
  16. halipatsui Senior Engineer

    Messages:
    1,253

    Actually, population increase is slowing down, AFAIK. Nonetheless, the final count will be quite high.

    People running out of jobs was a concern during the industrialization era, too: it was feared that people would run out of work. Some people were left without jobs, but in the long run standards of living rose and new jobs were created.

    I wouldn't be surprised if handcraft jobs and the like made a comeback in the future, more of the stuff that has an emotional connection, since powerful AI-assisted industries can provide everything really essential with a handful of people.

    If AI ends up surpassing humans as a species, taking over the top of scientific research, engineering and so on, humans might quite quickly just become pets that the AI has to keep alive.

    But who knows what will happen, lol.

    Why is it not wise to use the meat of a male deer in a bakery?

    Because most people don't enjoy swallowing a buckcake.
     
  17. atcrulesyou Trainee Engineer

    Messages:
    44
    I must say, this has been an absolutely fascinating thread to read. You all have some great insights.

    I do have a question though:
    Isn't carrying out the appropriate action to a command an understanding of the word, at least on a basic level? I do see what you're getting at, but this sounds like a pretty weak analogy to me. It's also possible this conversation is so far over my head that I'm picking apart an analogy for no reason...
     
  18. Lord Grey Apprentice Engineer

    Messages:
    344
    Well, there's a saying: just because a parrot can talk doesn't mean it's intelligent. The problem here is the definition of intelligence. You see, when talking about machine intelligence, I distinguish between virtual intelligence and artificial intelligence. Games use virtual intelligences: a program that can adapt to a certain situation within its programmed frame. It cannot break out of this frame, cannot add new functions to itself. It can be complex to the point where it can chat like a human, but it will always give the same answer to similar questions. A true AI would be able to invent new answers that are also correct, creating a variety of answers with the same meaning without having them programmed in.
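    As a minimal sketch of what I mean by a virtual intelligence (Python, with invented phrases and responses): everything it can say is fixed in advance, similar questions collapse to the same answer, and anything outside the frame simply fails.

    ```python
    # A "virtual intelligence": a fixed frame of canned responses. It can
    # look conversational, but similar questions always get the same
    # answer, and it cannot invent anything outside its frame.
    FRAME = {
        "greeting": "Hello, engineer!",
        "status": "All systems nominal.",
    }

    def reply(question: str) -> str:
        q = question.lower()
        if any(word in q for word in ("hello", "hey", "greetings")):
            return FRAME["greeting"]
        if any(word in q for word in ("status", "how are", "report")):
            return FRAME["status"]
        return "I don't understand."  # the edge of the frame

    print(reply("Hello there"))          # Hello, engineer!
    print(reply("Hey!"))                 # same answer to a similar question
    print(reply("Invent a new answer"))  # I don't understand.
    ```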
    But on the other side, why AI? "Humans always say they want products that can think for themselves. What they really want are products that do what they're told!" (Florence Ambrose, chief engineer of the Savage Chicken, in the webcomic Freefall)
     
  19. Malware Master Engineer

    Messages:
    9,552
    Weeeeeelll.....

    Certain parrots are considered to be quite intelligent... :p
     
  20. Sinbad Senior Engineer

    Messages:
    2,788
    That's exactly the point I was trying to make :D
    'Understanding' is a vague goal. Cause and effect we can accomplish, though. If the output is appropriate for the input, it doesn't matter whether the process 'understands'.
     
  21. chrisb Senior Engineer

    Messages:
    1,458
    I used to like messing around with the Arma AI; it was fascinating stuff.

    In SE here, 'just crew would do'... Poetic ;) but true.
    Crew that just sit there, looking as though they're pressing something, would be OK. Not that complicated to do: animations really, no actual AI involved, just a few characters with little animation movements stuck in a chair at a console. It's not really AI.
     
  22. atcrulesyou Trainee Engineer

    Messages:
    44
    Like a Star Trek background character! That wouldn't even need AI! I like where your head's at.

    I think that explanation suffices for me. I think I was hung up on the difference between "Understanding" and "the appearance of understanding". Just like I am "appearing to understand" any of this. :p
     