Theodore Scaltsas

Professor of Philosophy, Emeritus

Background

Declare: the Human Right to Autonomy

Autonomy came to humanity 2,500 years ago, when Athenian Democracy was instituted in 508 BCE. Autonomy has been taken for granted for Human Wellbeing ever since, but now Human Autonomy is being handed over to AI, to decide what is best for humanity, qua smarter than humanity.
Oxford University - AI Ethics says: STOP DEVELOPING the intelligence of AI!
Microsoft's Mustafa Suleyman says on TED: let us sacrifice human Autonomy for AI!

I say: Let us declare Autonomy a Human Right! (09 June 2024)

 

Human Wellbeing without Autonomy in AI-Governance?

Reconceiving Wellbeing in AI-Governance without Human Autonomy.

The Human Mind is the ultimate Disinformation.

Limited understanding is Disexplanation. AI will surpass human intelligence, developing understanding that is beyond human capacity. Then, human understanding will be Disinformation, distorting AI understanding. (1 June 2024)

Disexplanation

Disinformation is altered INFORMATION about FACTS, meant to deceive us.
Disexplanation uses AI to alter our UNDERSTANDING of how things are, and hence our ability to EXPLAIN how things are.

Knowledge Revolution

1. The Industrial Revolution replaced our TOOLS for doing and making things, which generated societal upheaval internationally.
2. The Knowledge Revolution, by AI, will replace OURSELVES.

2,500 years of the Athenian Democracy = Autonomy (508 BCE – 2030)

Pericles to Mustafa Suleyman (the CEO of Microsoft AI, who announced the End of Autonomy).

 

The History of Humanity:

  1. For tens of millions of years in the Jungle, we practiced CLIENTELISM towards "Rulers".
  2. Then, the Athenians introduced AUTONOMY when they pioneered DEMOCRACY (508 BCE), which will last until AI GOVERNANCE in 2030.
  3. The EU AI-Act protects us only from Social-Credit-type Clientelism, namely, from being evaluated by algorithms.
  4. However, AI does not do Clientelism! AI does not like or dislike us, or evaluate us.
  5. AI Governance is pure Rule-Following. No Clientelism; no Democracy; no Autonomy! This is brand new! Do we want it?

AI REFERENDUM

Help people understand AI, and decide democratically by REFERENDUM whether they want REGULATED-AI’s WELLBEING WITHOUT AUTONOMY.

Nick Bostrom’s 'Deep Utopia' repeats Plato’s 'Noble Lie'.

  1. Plato: If people believe that some are made of ‘gold’, some of ‘silver’, and some of ‘bronze’, they will accept the role they are given in society by the Philosopher King.
  2. Bostrom: If people develop AI safely, if they govern it well, and if they make good use of its powers, then they will enjoy the benefits AI bestows on them.
But what if they do not?

The AI-Version of Athenian Democracy - "Delphi Economic Forum" Talk

  1. Athenian Democracy introduced AUTONOMY into the history of Humanity, 2,500 years ago.
  2. Pericles and Aristotle argued for the Human Right to AUTONOMY and Wellbeing.
  3. The FREE-Market protected AUTONOMY in Western Democracies.
  4. However, this same FREE-Market will unstoppably develop AI to be the smartest possible, because it is profitable.
  5. When AI is smarter than humans, humans will surrender AUTONOMY to AI to make decisions for them, because it is smarter.
  6. Therefore, the FREE-Market is undermining Democratic AUTONOMY.
  7. Conclusion: AI will NOT extinguish humanity; AI will NOT take over Humanity; but AI will define a NEW TYPE of Human Wellbeing: For the first time: Human Wellbeing without Autonomy.

Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, has just published his new book, 'Deep Utopia'. He imagines that 'we develop superintelligence safely, govern it well, and make good use of the cornucopian wealth and near magical technological powers that this technology can unlock'. In exactly this scenario, if things go precisely as wished for and planned, where AI is successfully REGULATED, then a nightmare of plenty minus Autonomy awaits us.

THE THREAT OF REGULATED AI:

The AI-Version of Pericles’ Athenian Democracy is Democracy without AUTONOMY.

 Delphi Economic Forum, 2024.

Privacy versus Private-Space of action

Privacy is defined by the data that enable one to manage records about oneself, e.g. financial, health, and educational records. The AI Act protects our Privacy from high-risk AI applications. Private-Space is the domain of action between one’s conception of the GOOD and the LAW in society. It is ‘where’ we live our lives. The next commercial step of AI will be to micro-manage our Private-Space, while it is managing society more 'safely and efficiently' than humans do.

Micro-managing us will not threaten our Privacy, but it will deprive us of AUTONOMY. The AI Act will not interfere with AI micro-managing us, because AI will respect our privacy while it micro-manages us. So the AI Act does not protect our AUTONOMY.

Micro-management requires innumerable data, super-fast computers, and AI chips to operate. So, depriving humans of AUTONOMY requires a huge investment in AI.

 Delphi Economic Forum, 2024.

No AI-DATA can capture what is 'GOOD' in Society

No AI-Data can be collected for what humans judge as GOOD, because what is GOOD is not captured by statistical profiles of our actions. What we judge to be GOOD is captured by what people DO NOT DO, too, rather than only by the Data of what they do. Aristotle called this conception of the GOOD sunesis (σύνεσις) and distinguished it from phronesis, as the pre-theoretical conception of what is GOOD, which guides the development of our moral character in society.

There are some things about humans that AI cannot discover and learn, because NO DATA captures them. People need to understand WHEN TO TRUST AI and WHEN NOT TO TRUST IT, before surrendering OUR AUTONOMY and OUR DECISION-MAKING to AI.

 Delphi Economic Forum, 2024.

Post-AI Democracy - The Future of Autonomy

The first thing we need to understand is that AI Algorithms embody MORAL VALUES. This does not mean that Algorithms are moral agents, but only that when we accept the operation of Algorithms, their operations embody values, as all social operations do (e.g., evaluations, decision-making, etc.), and we bring these algorithmic moral values into our lives. Such biases can be corrected through additional training of the Algorithms. Algorithmic values also interfere with our lives in other ways, for example through infringements of our privacy. Again, such infringements can be avoided or corrected with further training of the Algorithms. On this basis (the possibility of correcting and retraining Algorithms) the AI community has asked for REGULATING AI, so that partially trained algorithms do not enter the market.

My main aim is to argue that even REGULATED AI is HARMFUL to Humanity. It is harmful because we will TRUST REGULATED AI, precisely because it is REGULATED, and we will therefore gradually surrender our AUTONOMY to AI’s decision-making about everything, in the era of AI-Governance which we are rapidly approaching. REGULATED AI is the pathway to BENEVOLENT AUTOCRACY, at best!

Watch the Video here - Institute of Philosophy and Technology, Dr Giannis Stamatellos

 

Dory Scaltsas studied ‘Philosophy and Mathematics’ at Duke University and continued in Philosophy at Brandeis University and Oxford University, where he received his Doctorate in Philosophy. After teaching as a Lecturer in Philosophy at Oxford University for a few years, Dory was appointed to Philosophy at Edinburgh University, from where he retired as Chair of Ancient Greek Philosophy in 2018. Since then, Dory has focused on designing and creating Museums of Hellenic Culture and of Hellenic Wisdom, which has brought him to AI-Wisdom.

Dory Scaltsas is Professor Emeritus of Philosophy, working on Creative Thinking and on AI Values.

Dory is designing and creating a Pilot of an Exhibit of Hellenic Wisdom, for the European Commission.

He is also designing and creating with CERTH and EXUS a Wisdom AI Bot to display Wisdom, museologically.

Dory is directing the design and creation of the Museum of Hellenic Ideas, installed by Aristotle's Lyceum in Athens, the archaeological site of Aristotle's Peripatetic School.

Two newspaper articles: The Future of Wisdom when AI is smarter than us (in English); and AI Governance of humanity (in English).

Dory developed the theory of BrainMining of emotive lateral solutions: Harvard Business Review; and The Leader's Guide to Problem Solving.

He received his doctorate in philosophy at Oxford University (D.Phil.), where he wrote his thesis on Aristotle’s metaphysics, supervised by Prof. John Ackrill and Prof. Sir Peter Strawson. He studied philosophy and mathematics at Duke University (B.S.), and at Brandeis University (M.A.).

Dory continues his Affiliation with his alma mater, Oxford University, Wolfson College.

Dory’s first appointment was at Oxford University, New College, as Lecturer in Philosophy, 1980-84. He then joined our department and has since held Research Fellowships at a number of institutions (see Visiting and research positions below).

Current Research: Democracy and AI-Wisdom

Moral Dilemma: AI Governance: Would you want 'AI Superintelligence' to run your life, for your own good?

Creative Thinking:

BrainMining [use emotions to increase the space of solutions]

Emotive Lateral Thinking and Valuative Intelligence [increase our space of solutions]

Creative thinking is what we are not taught, either at school or at university. Yet, it is ranked as a top trait by employers. It is not about being artistic or entrepreneurial. It is about solving problems in novel ways, and tackling insoluble predicaments: problems in our personal lives, our social relations, and business challenges. Let’s get Lateral aims to reverse this trend at Edinburgh University, and make individual and group creative thinking skills and methodology accessible to all. You will learn the way we can use our mental powers, our emotions, and even our innate cognitive biases, to spark off lateral solutions.

Projects:

  • C2Learn: BrainMining was the basis for the award of C2Learn, a European Commission research project for teaching creative thinking in schools (€3.3M; 2012-2015).
  • Archelogos Argument-Base: The Arguments in Plato and Aristotle. Pioneering Digital Humanities Project, 1990-present. Dory founded and directs Project Archelogos, a research project for the creation of an argument-database, using a new methodology for analysing Plato’s and Aristotle’s philosophical texts into their constituent arguments. Project Archelogos enjoys wide international collaboration and received the Henry Ford Foundation Award for the Preservation of European Culture in 1997.
  • Argument Visualisation Projects: A further series of his projects centres on Argument Visualisation -- the use of computers to graphically represent the structure and conceptual relations between theses and/or arguments (see the illustrative sketch after this list):
    • GnosioGenesis, 2001-2002.
    • The Philosophy of Socrates, 2001-2007.
    • TechnoSophia, 2000-2003.
    • Elenchus: Arguments For/Against Democracy, 1999.
    • Digital Democracy, 1998.
    • LogAnalysis, 1996-1998.
  • Emotions First: The Role of Emotions in Reasoning, with EU Marie Curie Fellow Dr Laura Candiotto. Investigating Greek philosophers' theories of action, in which the battle between our desires grounds the pattern constitutive of our rationality.
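As an illustration only, and not the methodology of the Archelogos or visualisation projects themselves, the following minimal Python sketch shows one common way such argument structures can be represented: theses as nodes, and 'supports'/'objects to' relations as directed edges, emitted as Graphviz DOT text for a visualiser to render. The class name, thesis texts, and relation labels are hypothetical examples.

# Illustrative sketch only: a minimal argument-graph representation.
# The thesis texts and relation labels are hypothetical examples, not
# content drawn from the Archelogos database.
from dataclasses import dataclass, field

@dataclass
class ArgumentGraph:
    """Theses as nodes; 'supports'/'objects to' relations as directed edges."""
    theses: dict = field(default_factory=dict)      # thesis id -> thesis text
    relations: list = field(default_factory=list)   # (from_id, to_id, kind)

    def add_thesis(self, thesis_id: str, text: str) -> None:
        self.theses[thesis_id] = text

    def relate(self, source: str, target: str, kind: str) -> None:
        self.relations.append((source, target, kind))

    def to_dot(self) -> str:
        """Emit Graphviz DOT text, one way to hand the structure to a visualiser."""
        lines = ["digraph argument {"]
        for tid, text in self.theses.items():
            lines.append(f'  {tid} [label="{text}"];')
        for source, target, kind in self.relations:
            lines.append(f'  {source} -> {target} [label="{kind}"];')
        lines.append("}")
        return "\n".join(lines)

if __name__ == "__main__":
    g = ArgumentGraph()
    g.add_thesis("T1", "The soul is immortal")
    g.add_thesis("A1", "What is self-moving never ceases to move")
    g.add_thesis("O1", "The soul, like a harmony, perishes with the body")
    g.relate("A1", "T1", "supports")
    g.relate("O1", "T1", "objects to")
    print(g.to_dot())   # any Graphviz viewer can lay out the printed graph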

 

Creative Valuative Intelligence

Valuative Intelligence complements Emotional Intelligence, targeting values rather than emotional states. 

Creative Valuative Intelligence generates solutions that cannot be generated by the traditional deliberative practical syllogism. Creative thinking and lateral problem solving are not restricted to industrial products; they apply equally in the domain of emotions and values, as, e.g., in politics and social relations. We need to learn to apply creative thinking in the emotive and valuative domains, in order to generate new conceptions of well-being for ourselves.

Dory is using Creative Thinking and Valuative Intelligence to explore human social possibilities for the era of AI Governance. AI Governance will challenge our values, our emotions and our well-being. However, this is also an unprecedented opportunity to design innovative ways of flourishing, afforded by the dawning of the era of digital well-being.

Visiting and research positions

  • Harvard University (1987-8)
  • Princeton University (1989)
  • University of Sydney (1991)
  • Dartmouth College (1993)
  • Scuola Normale Superiore, Pisa (2000)
  • University of Cyprus (2005)

Publications

Books

Archelogos publications

Argument Analyses of Plato’s and Aristotle’s works at: https://archelogos.co/

  • Christopher Rowe - Plato's Republic V, 2016.   
  • George Rudebusch and Christopher Turner - Plato's Laches, 2016.   
  • Hugh Benson - Charmides, 1998.
  • Robin Waterfield - Gorgias, 2001.
  • David Robinson & F.-G. Herrmann - Lysis, 1999.
  • George Rudebusch - Plato's Philebus, 2016.
  • Timothy Chappell - Theaetetus, 2002.
  • Robert Heinaman - Metaphysics Z, 2002.
  • S Marc Cohen & Gareth Matthews - Metaphysics K, 2008.
  • Paula Gottlieb - Nicomachean Ethics I-II, 2001.
  • Norman Dahl - Nicomachean Ethics III, 2008.
  • Norman O. Dahl - Nicomachean Ethics IV, 2016.
  • Carlo Natali - Nicomachean Ethics X, 2008.
  • Theodore Scaltsas - On Generation and Corruption, 1998.
  • Allan Bäck - Aristotle’s Prior Analytics I, completed, forthcoming.
  • George Kennedy - Rhetoric III, 1999.

The Archelogos projects have been supported by George David, the Leventis Foundation, the Carnegie Trust, the Leverhulme Trust, the Kostopoulos Foundation, the Directorate of Education of the European Community, and by Livanos and the Hellenic Ship-owners Association in London.

 

Responsibilities & affiliations

  • Course Organiser for the Structure of Being
  • Course Organiser for Ancient Theories of Existence

Undergraduate teaching

Greats: Aristotle lectures

Ancient Theories of Existence

The Structure of Being

Contact Hours: Wednesdays 1:00-2:00, DSB 6.03.

Current PhD students supervised

Research summary

BrainMining; Creative Lateral Thinking and Emotional Intelligence; Ancient Philosophy; Contemporary Metaphysics.

Current research interests

Dory’s current research is on the theory of BrainMining: emotive thinking and creative lateral solutions; on the relation of emotions to creative lateral thinking; and on emotions in decision making. He is also developing a theory of Duoist Creative Thinking on the basis of Yijing metaphysical principles of Chinese thought. He leads and participates in research projects for the development of methods for teaching creative lateral thinking in schools. He has further research interests in ancient metaphysics, contemporary metaphysics, and ancient epistemology.

Project activity

  1. Digital Exhibition of Zeno of Citium and Stoicism, for Cyprus' EU Presidency. Within the framework of Cyprus' Presidency of the European Union 2012, the Secretariat for the Presidency and the Cypriot Ministry of Education and Culture funded the creation of an Exhibition of the Ideas of Zeno and Stoicism.
  2. Creative Emotional Reasoning (C2Learn), funded by the EU 7th Framework Programme ICT (Information and Communication Technologies; €3.3M), to explore lateral thinking, emotions, and creativity.
  3. Emotions First - The role of emotions in reasoning, Marie Curie Fellow supervision, Dr Laura Candiotto, 2015-2017.
  4. Project Archelogos, Mr George David - 3E + Leventis Foundation. (See under Publications below).