Last week The New York Times ran an article on an AI chatbot named Sydney, calling it weird and creepy: Sydney is "acting unhinged."
Today it is Businessinsider.com, where the author, Emily Senkosky, is "terrified." Last year she wrote for a company licensed to use GPT-3, whose latest version is ChatGPT. Her task in 2021 was to write an article in which she wrote one paragraph and the AI wrote the next, and readers were to guess which was which.
Fast-forward to the new version: she decided to assess where ChatGPT is now. And what terrified her? "It was able to draw conclusions from the draft and write, dare I say, original ideas based on it, also using tactics for writing well, such as varying the length of the sentences. It even used some humanlike phrases to describe things, and most impressively, it was able to intuitively explain the limitations of its own abilities."
So could we expect more creativity in the regurgitation of articles in the future? That is what I encounter across the internet when researching a topic or story: the same paragraphs reproduced in every publication that has reported on it. Often the same mistakes get repeated, too, mistakes of fact, grammar, and spelling. If the rewrites at least corrected those errors, that would be a benefit.
What about this headline: Russian hackers are trying to break into ChatGPT. Or this one: Fraudsters are using machine learning to help write scam emails in different languages. Or this: Business email compromise (BEC) gangs pose as your boss, colleague, or supplier and request urgent or important financial transfers. One such scamming group, named Midnight Hedgehog, uses executive impersonation to deceive recipients into making payments for bogus services.
But then I use Flexify to create abstract images like this one out of pictures like the second one, and I think that's okay and even fun.