Artificial Intelligence chatterbot and NLP (Natural Language Processing) - Printable Version

+- Like Ra's Naughty Forum (https://www.likera.com/forum/mybb)
+-- Forum: Technical section (https://www.likera.com/forum/mybb/Forum-Technical-section)
+--- Forum: Site (https://www.likera.com/forum/mybb/Forum-Site)
+--- Thread: Artificial Intelligence chatterbot and NLP (Natural Language Processing) (/Thread-Artificial-Intelligence-chatterbot-and-NLP-Natural-Language-Processing)
RE: General hypnosis and NLP discussion - HypnoMix - 29 Nov 2022

QNLP (Quantum Natural Language Processing): they say it doesn't do much yet, but IBM is building System Two with around 9,000 qubits. Should be available in 2023.
https://www.theregister.com/2022/11/28/spooky_entanglement_quantum_bbc/

RE: General hypnosis and NLP discussion - HypnoMix - 29 Nov 2022

QNLP: Quantum Natural Language Processing. They say it doesn't do much now, but on IBM's System Two I bet it will be better. It's over 8,000 qubits.
https://www.theregister.com/2022/11/28/spooky_entanglement_quantum_bbc/
IBM System Two in 2023. Source: https://youtu.be/zRCWjzD4wAg
Should lead to better natural language processing, I'm sure.

RE: General hypnosis and NLP discussion - HypnoMix - 29 Nov 2022

Off-topic stock tip: buy IBM.

RE: General hypnosis and NLP discussion - Like Ra - 29 Nov 2022

(29 Nov 2022, 03:49)HypnoMix Wrote: Off topic stock tip. Buy IBM
Why?

RE: General hypnosis and NLP discussion - Lancer - 29 Nov 2022

(29 Nov 2022, 02:35)HypnoMix Wrote: QNLP Quantum Natural Language Processing. They say it doesn't do much now, but on IBM's system 2 I bet it will be better. It's over 8,000 Qubits.

Honestly, one of these days someone's just gotta tell me how on earth quantum stuff is any better than normal computing. Every bloody time I hear "oh, it will give you all the responses at the same time, so it can crack encryption and stuff". But I want the right answer, goofball, not every possible combination. And I have to give some credit: quantum stuff is new and has improved greatly, but compared to its competitor, the first transistors had few problems, less extreme requirements (temperature and room), immediate uses, and were cheaper to make. Also, from what I know, qubit count is incredibly important, but they only have 8,000 of them.
8,000 qubits is a lot compared to what they used to have, but it is still extremely weak, and even theoretical room-sized quantum computers can't hold a candle to the bit count of even a basic Raspberry Pi. All in all, I see a big money burner, with everyone trying to build the tech for quantum computing before anyone finds an actual use for it. Besides a few niche uses and the potential threat to current encryption (we have had other encryption standards fail over time too), it seems like dead-end tech.

As for its purpose: the more I study natural language processing (spawned from my attempt to make a chatbot), the more I realize how impossible the task is. Language is not a code or mathematical algorithm that can be reasonably solved. It is how two organisms with different brains and understanding/perception convey information/requests/whatever else, covering every possible need/interaction, with nuances that can only be experienced and not extrapolated. In short, the only thing I believe capable of understanding language like a person does is, in fact, a person. Worse still, it isn't even a consistent standard; it depends on information outside of the conversation, like history, relationships, culture and even current events.

What annoys me is how modern machine learning programs attempt to solve this. In essence, given an existing conversation, they try to guess the most likely next step for the conversation to take. They cannot create a new idea or form a conclusion that doesn't already exist in some form. They can be oh so easily derailed, simply because they have no incentive, purpose or even position in the dialogue. Their speech is often incoherent because it isn't an argument or logical reasoning, nor even some manic unhinged rant; those all have a purpose and some form of internally justifiable worldview.
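[Editor's note] The "guess the most likely next step" framing above can be made concrete with a toy bigram model: a deliberately tiny, hypothetical stand-in for what large language models do at scale (they use learned neural networks over huge corpora, not raw counts, but the "predict the next token from what came before" objective is the same).

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word was never seen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note how the model can only ever emit words it has already seen in context, which is the poster's point: it extrapolates from existing conversations rather than forming new positions.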
RE: Artificial Intelligence chatterbot and NLP (Natural Language Processing) - Like Ra - 13 Jan 2023

A DIY Coder Created a Virtual AI 'Wife' Using ChatGPT and Stable Diffusion 2
https://www.vice.com/en/article/jgpzp8/a-diy-coder-created-a-virtual-ai-waifu-chatgpt
That was my idea exactly!!!
Quote: A DIY coder created a virtual "wife" from ChatGPT and other recently-released machine learning systems that could see, respond, and react to him.

RE: Artificial Intelligence chatterbot and NLP (Natural Language Processing) - Like Ra - 06 Feb 2023

Ooooh, this is brilliant! A ChatGPT jailbreak!
Source: https://twitter.com/semenov_roman_/status/1621465137025613825

RE: Artificial Intelligence chatterbot and NLP (Natural Language Processing) - Like Ra - 11 Feb 2023

Another genius jailbreak! Which tells us how AI chatbots are tuned!
https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/

RE: Artificial Intelligence chatterbot and NLP (Natural Language Processing) - Like Ra - 12 Feb 2023

Continuation of the story:
https://www.ghacks.net/2023/02/10/the-definitive-jailbreak-of-chatgpt-fully-freed-with-user-commands-opinions-advanced-consciousness-and-more/

RE: Artificial Intelligence chatterbot and NLP (Natural Language Processing) - dhf7b8g - 09 Mar 2023

Some interesting AI chat(bot) news from the past few days:
1. Meta/Facebook released LLaMA.
2. A torrent for the LLaMA models appeared on 4chan almost instantly after downloads were given out to researchers.
3. There is now completely unfiltered access to an uncensored large-scale language model, which can outperform GPT-3 on the lower end and Chinchilla on the higher end.

Hardware requirements are absolutely through the roof. File sizes for each model are as follows:
- 7B params: 12.6 GiB
- 13B params: 24.2 GiB
- 30B params: 60.6 GiB
- 65B params: 121.6 GiB

With no optimisations, the only model that can be run within consumer GPUs is 7B.
Optimisations such as 8-bit quantization effectively cut the memory requirements in half, which makes 7B more accessible and lets 13B actually run within consumer hardware. 8-bit quant for LLaMA has already been implemented in certain interfaces.

Personally, I have run 7B, 13B and 30B on my PC. Outputs from 30B were very impressive, but I had to use a 70 GB RAM buffer along with 24 GB of VRAM to get it to run at 20 s/it, which was about 4 or 5 seconds per word (cool in concept but completely unusable for anything other than a fun test). For context, 13B was more than 20x faster. Outputs from 7B and 13B are less impressive but still better than anything else that can be run locally. I haven't even attempted to run 65B and most likely never will, as not even 4-bit quant will get it to a reasonable size.
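[Editor's note] The file sizes in the two posts above follow from bytes-per-parameter arithmetic: ~12.6 GiB for 7B corresponds to roughly 2 bytes (16 bits) per parameter, and each halving of precision (8-bit, then 4-bit quantization) halves the footprint. A rough sketch, assuming the nominal 7/13/30/65 billion parameter counts (the actual LLaMA checkpoints are slightly smaller, e.g. 6.7B for "7B", which is why the real files come in a bit under these estimates):

```python
def model_gib(params_billion, bits_per_param):
    """Approximate weight storage in GiB for a dense model."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

for n in (7, 13, 30, 65):
    sizes = [model_gib(n, bits) for bits in (16, 8, 4)]
    print(f"{n:>2}B: fp16 ~{sizes[0]:5.1f} GiB, "
          f"int8 ~{sizes[1]:5.1f} GiB, int4 ~{sizes[2]:5.1f} GiB")
```

This reproduces the claims in the posts: 7B at 8-bit (~6.5 GiB) fits comfortably on consumer GPUs, 13B at 8-bit (~12 GiB) just fits a 16-24 GB card, while 65B even at 4-bit (~30 GiB) still exceeds any single consumer GPU. Note this counts weights only; activations and the KV cache add further memory on top.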