Artificial Intelligence, Machine Learning & Robotics (Edexcel GCSE Computer Science)

Revision Note

Robert Hampton

Expertise

Computer Science Content Creator

What is artificial intelligence?

  • Artificial intelligence (AI) is a machine that can display intelligent behaviours similar to those of a human

  • AI is a system that can:

    • Learn - acquire new information

    • Decide - analyse and make choices

    • Act autonomously - take actions without human input

What is machine learning?

  • Machine learning is one method that can help to achieve artificial intelligence (AI)

  • Giving a machine data so that it can 'learn' over time helps to train a machine or software to perform a task and improve its accuracy and efficiency
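A minimal sketch (not from the note itself) of what 'learning from data' means: instead of a programmer writing the rule, the machine derives a rule from labelled examples. The fruit weights, labels and threshold rule below are all invented for illustration.

```python
# Illustrative only: the machine 'learns' a decision rule (a weight
# threshold) from labelled examples, rather than being given the rule.

def learn_threshold(examples):
    """Learn a boundary between two classes from (weight, label) pairs."""
    apples = [w for w, label in examples if label == "apple"]
    melons = [w for w, label in examples if label == "melon"]
    # Place the decision boundary midway between the two class averages
    return (sum(apples) / len(apples) + sum(melons) / len(melons)) / 2

def classify(weight, threshold):
    return "apple" if weight < threshold else "melon"

# Training data: weights in grams with their correct labels (made up)
training_data = [(120, "apple"), (150, "apple"), (900, "melon"), (1100, "melon")]
threshold = learn_threshold(training_data)

print(classify(130, threshold))   # light fruit -> "apple"
print(classify(1000, threshold))  # heavy fruit -> "melon"
```

Feeding the program more labelled examples refines the threshold, which is the sense in which more data improves the machine's accuracy over time.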

What is robotics?

  • Robotics is the study and use of robots that carry out tasks by following a precise set of programmed instructions

  • Robots can be categorised into two groups:

| Dumb robots | Smart robots |
| --- | --- |
| Repeat the same programmed instructions over and over again (no AI) | Carry out more complex tasks and can adapt and learn (AI) |
| E.g. a car assembly line | E.g. assisting surgeons in delicate procedures |

  • The development of artificial intelligence, including the increased use of machine learning and robotics, raises ethical and legal issues such as:

    • Accountability

    • Safety

    • Algorithmic bias

    • Legal liability

Accountability & Safety

Why is accountability & safety an issue?

  • Accountability can be an ethical issue when the use of AI leads to a negative outcome

  • Safety can be an ethical issue when you try to ensure safety in an algorithm that is designed to make its own choices, learn and adapt

  • The choices made by AI will have consequences; who is held accountable when things go wrong?

Driverless car accident

Scenario

As a passenger in a driverless car, the car suddenly swerves to miss a child in the road and kills a pedestrian walking on the pavement

Ethical issues

  • Who takes accountability for a driverless car?

  • Should AI be programmed to prioritise the passengers' safety or the safety of pedestrians?

  • What rules should be programmed to ensure safety?

  • What happens when danger is unavoidable?

Algorithmic Bias

Why is algorithmic bias an issue?

  • Algorithmic bias can be an ethical issue when AI has to make a decision that favours one group over another

  • If data used in the design of AI is based on real-world biases then the AI will reinforce those biases

  • If the programmer of the AI has personal biases they could make design decisions that reinforce their personal biases
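A hypothetical illustration of the point above: if a 'model' simply copies past approval rates, any unfairness in the historical decisions is repeated automatically. The postcodes, outcomes and 0.5 cut-off are all invented for this sketch.

```python
# Illustrative only: bias in historical data is reinforced because the
# decision rule is derived entirely from past outcomes.

historical_loans = [
    ("AB1", True), ("AB1", True), ("AB1", False),   # mostly approved
    ("CD2", False), ("CD2", False), ("CD2", True),  # mostly denied
]

def approval_rate(postcode):
    outcomes = [approved for pc, approved in historical_loans if pc == postcode]
    return sum(outcomes) / len(outcomes)

def biased_decision(postcode):
    # Approve only if past applicants from this postcode were usually approved
    return approval_rate(postcode) > 0.5

print(biased_decision("AB1"))  # True  - historical advantage repeated
print(biased_decision("CD2"))  # False - historical disadvantage repeated
```

No individual merit is considered at all, which is exactly how historical bias becomes a self-reinforcing loop.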

Loan approvals

Scenario

A bank introduces the use of AI to streamline loan approvals. Historical loan data is used and a client is denied a loan based on historical approval rates for certain races or postcodes

Ethical issues

  • How is it fair to deny a person based on historical data?

  • Is the AI reinforcing biases?

  • Is the AI programmer aware of biases in historical data?

  • Who is responsible for biased outcomes?

Legal Liability

Why is legal liability an issue?

  • Legal liability is an issue in all aspects of AI, but particularly when the use of AI leads to the loss of human life or criminal activity

  • In the eyes of the law, who is responsible?

    • The programmer?

    • The manufacturer?

    • The consumer?

Smart toy

Scenario

A person buys a smart toy designed to interact with a child and personalise the play experience, learning their preferences etc.

A hacker gains access to the smart toy, stealing personal data

Legal issues

  • Can the programmer who wrote the code be sued?

  • Can the manufacturer of the toy be sued?

  • Have privacy laws been violated by the manufacturer?

  • Does the smart toy need to be recalled?

Worked Example

A hospital uses an algorithm to help decide how many nurses are needed on each day

Discuss how algorithmic bias can affect the decision the hospital makes [6]

Your answer should consider:

  • the cause of algorithmic bias

  • the impact on individuals and communities of algorithmic bias

  • the methods available to reduce the risk of algorithmic bias

Answer

Causes of algorithmic bias

Algorithms trained using historical data - if past scheduling practices were unfair, the algorithm would continue the bias

Algorithm design focussed on efficiency over fairness - filling shifts without considering experience

Lack of transparency - hard to check and fix any potential bias

Impacts of algorithmic bias on individuals and communities

Nurse safety - unfair scheduling could lead to nurse burnout, leading to medical errors

Unequal scheduling - bias could lead to groups of nurses being assigned more shifts than others or regularly assigned undesirable hours

Patient care - short staffing compromising patient care

Methods to reduce algorithmic bias

Human oversight - algorithmic recommendations should be reviewed and adjusted by human schedulers first

Transparency - nurses and all employees should understand how the algorithm is making decisions so that concerns can be raised if needed

Auditing - regular audits to identify and address any emerging bias in the algorithm's output
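The kind of audit the answer describes can be sketched very simply: regularly compare the algorithm's output across groups and flag any large gap for human review. The nurse names, shift counts and gap threshold below are invented for illustration.

```python
# Illustrative only: a basic fairness audit that flags a large gap in
# average shifts between groups of nurses.

def audit_shifts(shifts_per_nurse, groups, max_gap=5):
    """Flag possible bias if average shift counts differ too much between groups."""
    averages = {}
    for group, nurses in groups.items():
        averages[group] = sum(shifts_per_nurse[n] for n in nurses) / len(nurses)
    gap = max(averages.values()) - min(averages.values())
    return gap > max_gap, averages

shifts = {"Ali": 22, "Beth": 21, "Cara": 12, "Dev": 11}
groups = {"full_time": ["Ali", "Beth"], "part_time": ["Cara", "Dev"]}

flagged, averages = audit_shifts(shifts, groups)
print(flagged)  # True - a human scheduler should review the allocation
```

A flagged result does not prove bias on its own; it is the trigger for the human oversight step above.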



Author: Robert Hampton

Rob has over 16 years' experience teaching Computer Science and ICT at KS3 & GCSE levels. Rob has demonstrated strong leadership as Head of Department since 2012 and previously supported teacher development as a Specialist Leader of Education, empowering departments to excel in Computer Science. Beyond his tech expertise, Robert embraces the virtual world as an avid gamer, conquering digital battlefields when he's not coding.