Tech Leaders take a stand to pause AI development, saying ‘It’s a dangerous race that no one can predict or control’

Revelation 13:14 “…by the signs that it is allowed to work in the presence of the beast it deceives those who dwell on earth…”

Important Takeaways:

  • ‘It’s a dangerous race that no one can predict or control’: Elon Musk, Apple co-founder Steve Wozniak, and more than 1,000 other tech leaders call for a pause on AI development, which they say poses ‘profound risks to society and humanity’
  • More than 1,000 signatories, including Musk, Wozniak, and Yuval Noah Harari, signed the letter on the Future of Life Institute’s site
  • They say the current race to develop AI is dangerous and unpredictable
  • Musk and other industry figures are calling for a six-month pause on development
  • They are asking all AI labs to stop developing the technology for at least six months while more risk assessment is done. If any labs refuse, they want governments to ‘step in’.
  • The letter also detailed potential risks to society and civilization posed by human-competitive AI systems, including economic and political disruption, and called on developers to work with policymakers on governance and new regulatory authorities.
  • In 2014, Musk warned that humanity was ‘summoning the demon’ in its pursuit of the technology, which he fears will be developed to the point that it becomes self-sufficient and overpowers mankind.
  • Tech Leaders’ Plea:
    • AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.
    • Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable….
    • Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium…
    • AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.
