1. Before you criticize someone, walk a mile in their shoes. That way when you criticize them, you’re a mile away and you also took their shoes.

  2. Customer support is going to jump on this. Callers will talk for hours thinking they’re talking to a human.

    1. I recently had a tech support ticket open with what appeared to be either 1) a poor human forced to follow a very pre-determined script even though I stated unequivocally at the beginning that I had already tried all of those steps; or 2) a very poor AI that would only give canned responses, with no consideration of what I said. Not sure which is worse, but the conditions people work under in call centers certainly aren’t good.

    2. @Holyshiszle I’m definitely not ok with it. I don’t want to speak to a robot, I want to speak to a human. I do not care HOW advanced or what kind of a job it does. It’s not for me.
      I’m the guy that will go to the lane manned by a person rather than go to the self checkout. That crap is ruining us.

  3. I asked ChatGPT for a haiku about the transience of beauty:

    _Petals fall gently,_

    _Crumbling hues lost to the wind._

    _Fleeting beauty’s end._

    1. @полночный свет Anyone who uses AI is going to have more time touching grass than an actual poet.

  4. I would like to have it as a tool for my own writing. I can give it the information I want to write about, and then see if it has any ideas I want to use. I wouldn’t use it instead of my own writing; I would just use it for extra ideas. It is just an experiment I would like to try, to see if I could write faster.

    1. @Unoseth Dohi If I understand your question correctly, you are asking whether there are problems that arise when AI is used to replace or augment human decision-making.

      The answer is that both scenarios can potentially lead to problems.

      If AI is used to replace or augment human decision-making, there are potential problems. For example, there may be ethical concerns around whether certain decisions should be left entirely to machines, and whether they can be held accountable for their actions. Additionally, AI may not be able to fully replicate human intuition, creativity, and empathy, which are important factors in many decision-making processes.

      On the other hand, if people use AI, there are also several issues that can arise. For example, if people rely too heavily on AI and trust it completely, they may overlook important information or make decisions based solely on the AI’s output without fully understanding the reasoning behind it. Additionally, if AI is not properly designed or tested, it can produce biased or inaccurate results, which can lead to incorrect decisions or actions.

      Therefore, it is important to consider the potential benefits and risks of using AI in any given scenario, and to design and implement AI systems in a responsible and ethical manner.

    2. @rip_godzilla This sounds AI generated. The problem isn’t us implementing AI. The problem is AI implementing us.

    3. @Jen Kem The problem is it makes writing so easy that people with no writing talent can produce talentless writing.

    4. @Unoseth Dohi I’m assuming that you are asking about the potential implications of AI using human beings. In this scenario, it is important to note that AI does not have agency or the ability to make decisions on its own, but rather it is programmed by humans to perform specific tasks or functions.

      However, there are certain concerns that arise when AI is used to monitor, track, or manipulate human behavior. For example, if AI is used to monitor employees in the workplace, there may be concerns around privacy, surveillance, and the potential for misuse of data. Similarly, if AI is used in social media or advertising to influence human behavior, there may be concerns around the ethics of such practices and the potential for unintended consequences.

      It is important to recognize that AI is a tool that can be used for both positive and negative purposes, depending on how it is designed and implemented. Therefore, it is important for developers, policymakers, and society as a whole to carefully consider the potential implications of AI and to use it in ways that promote ethical and responsible behavior.

  5. It’s not a question of if bad actors will use A.I. to scam people. THEY WILL. In all areas of our lives, any way they can.

    1. @David Heller People have to actually get caught committing a crime before they can be charged with one. How much damage will be done before they are caught?

  6. GPT-4 is not a pastiche and is not just mimicking a huge database: you can ask it to reverse the order of, or alphabetize, *anything.* To do that it needs to _formally control_ the shape of the output according to implicit higher-order categories; “pastiche” won’t achieve that over open-ended input. Also, it was not taught any of this; it has an emergent abstraction ability, and abstraction to some degree is formal reasoning, which is the next step above concrete reasoning and the hallmark of thinking. GPT-4 is borderline meta-aware of some concepts, just like humans, but without the feelings that delude one into thinking they’re thinking…

  7. What Dr. Marcus describes sounds like a number of people I’ve seen existing in the world… a lot of info on hand, inadequate synthesis of that information, but good enough to convince others that it’s intelligent when many times it’s just making stuff up. AI could have a wonderful career ahead of itself I’d say. 😬💁🏾‍♂️

  8. I think we are already at the point in AI where the internet can’t be trusted. I think we need something like a net ID to be presented online for validation.

  9. All videos will have to be run through a pre-filter before being uploaded; we have powerful software that can easily recognise deepfake content. It will still be a game of cat and mouse, and we need to structure the laws correctly so we can filter out false content.

    1. Well… from my understanding, he said it doesn’t really know what it’s talking about; it’s just putting stuff together that may or may not be coherent. If that is the case, I don’t think it would be possible to give it “ethics”, at least not until it was actually intelligent.

  10. Some years ago this type of AI technology was purely science fiction. If it goes out of control, it can be used for misinformation, for impersonation of people, and so on. It would be important to have some type of control over AI.

  11. Another problem with the deep learning technique used is that it may end up giving a voice to very obscure and extreme ideologies that would otherwise be ignored by the majority, but that end up being picked up by large language models as they scrape the Internet for data to learn from.

  12. This guy is kind of a dope. Taking a mass of information and then using it to figure stuff out is pretty much exactly what humans do too.
