HumaniFAI: Shaping the Future of AI with Human Factors

At HumaniFAI, we specialise in guiding industries through the integration of Artificial Intelligence by placing humans at the core of technology design and deployment.

Introducing the HERA Principles

  • Focusing on user autonomy, inclusivity, transparency, and meaningful control to enhance user trust and adoption.
  • Embedding principles of fairness, beneficence, and non-maleficence throughout AI development, ensuring equitable and unbiased outcomes.
  • Establishing accountability, robust data governance, and strict adherence to regulatory frameworks to maintain organisational integrity.
  • Prioritising robustness, security, reliability, and auditability to safeguard against failures and adversarial threats.

Transforming how industries approach the integration of AI with Human Factors Expertise

Discover how our insights can support research in AI initiatives across industry.

Enhance user experience through ethical design.

Leverage research for responsible AI solutions.

Strengthen trust through transparency and accountability.

Foster innovation by aligning AI with human values.

Safeguard against bias with inclusive AI strategies.

Build robust AI systems to withstand real-world challenges.

Secure your AI solutions with rigorous auditing and verification.

What We Offer

AI Consultancy & Assurance

  • Expert assessments of AI usability and reliability.
  • Guidance to help industry comply with AI regulations and ethical standards.
  • Assurance of transparent, fair, and bias-free AI systems.

Human-Centred AI Research & Development

  • Analysing interactions between humans and AI technologies.
  • Developing AI systems that prioritise human needs, autonomy, and user experience.
  • Conducting usability and reliability assessments to ensure optimal integration.

AI Integration & Optimisation

  • Streamlining the adoption of AI into existing processes.
  • Designing “human-in-the-loop” systems for collaborative human-AI workflows.
  • Enhancing decision-making through clearer, more intuitive AI interfaces.

Industries We Work With

HumaniFAI provides expert Human Factors research and consultancy across a range of industries, with specialised expertise in:

  • Defence: Ensuring ethical compliance, operational reliability, and enhanced human oversight in critical military applications.
  • Healthcare: Developing transparent, user-centred AI systems to improve patient outcomes, clinician trust, and healthcare efficiency.
  • Cybersecurity: Designing robust, intuitive AI-driven solutions to strengthen threat detection, analyst response, and security assurance.
  • Transport: Optimising autonomous vehicle systems and human-machine interfaces (HMI) for enhanced safety, usability, and seamless human-AI collaboration.
  • Critical National Infrastructure: Enhancing resilience and reliability of essential services through human-centred AI, improving operational safety, efficiency, and decision-making under pressure.

While our expertise is especially strong in these sectors, our human-centred approach is versatile and applicable across diverse industries aiming for responsible and effective AI integration.

The People Behind the Scenes

Meet our team

A collective of innovative minds and spirited individuals, committed to bringing their best to support an ever-changing landscape of AI.

Phillip Morgan – Professor of Human Factors and Cognitive Science

Director

Prof Phillip Morgan’s PhD (2001-2005) focused on the effects of task interruptions on memory and problem solving, with multiple applications including the design of human-machine interfaces and dashboards within command and control centres and transportation.

Prof Phillip Morgan is an international expert with ~25 years’ experience in sociotechnical aspects of AI and automation, trust in disruptive technologies, cyberpsychology, transportation, HCI, interruption and distraction effects, and adaptive cognition. He has published extensively (>130 outputs) across these areas and secured >50 grants (~£40 million; funders include Airbus, CREST, ERDF, ESRC, EPSRC, HSSRC, IUK, NCSC, SOS Alarm, and Wellcome).

Prof Phillip Morgan was a Human Factors lead on the IUK (~£5m, 2015-18) Venturer Autonomous Vehicles for UK Roads project; Co-I and Human Factors lead on the IUK (~£5.5m, 2016-19) Flourish Connected Autonomous Vehicles project; and UK-PI on a JST-ESRC (2020-2023, with collaborators at the universities of Kyoto, Osaka and Doshisha) project Rule of Law in the Age of AI: Distributive Liability for Multi-Agent Societies. He co-leads a Human-Centred Design theme within an EPSRC (~£12m, 2024-2029) AI for Collective Intelligence (AI4CI) hub (https://ai4ci.ac.uk/), focusing on the design of smart AI agents for optimal outcomes (e.g. trust, acceptance, adoption, continued use) and for achieving positive behaviour change at scale.

Prof Phillip Morgan has delivered over 100 talks internationally on sociotechnical aspects of AI, automation, cyber security and robotics. These have included many invited Keynote sessions (with academic, industry, third sector and Government audiences) in countries including Australia, Japan, New Zealand, and Sweden.

Prof Phillip Morgan holds a Personal Chair within the School of Psychology at Cardiff University. He is Director of the Cardiff University Human Factors Excellence (HuFEx) Group; Director of Research within the Centre for AI, Robotics, and Human-Machine Systems (IROHMS); and Transportation Lead within the Digital Transformation Innovation Institute (DTII). He is Director of the Airbus – Cardiff University Academic Centre of Excellence in Human-Centric Cyber Security (H2CS) and also co-directs a Strategic Partnership between Airbus and Cardiff University. He is Visiting Professor at Luleå University of Technology, Sweden, and Distinguished Visiting Professor at the University of Canberra, Australia.

Dr Craig Williams – Principal Human Factors Scientist

Director

Dr Craig Williams’ PhD (2018-2023) focused on novel interventions adapting interfaces to induce behaviour change and mitigate the negative effects of interruptions and distractions within safety critical healthcare settings. This included the development of an ecologically similar interface / secondary task and utilisation of human-computer interaction principles to implicitly encourage better cognitive strategies that protect against the negative effects of task interruptions.

Dr Craig Williams has led multidisciplinary research initiatives focused on Human Factors and human-machine interaction across domains including autonomous systems, cybersecurity, Defence, and transport. His work integrates immersive driving simulators, rigorous experimental methodologies, and advanced statistical analyses to understand critical aspects such as trust dynamics, behavioural adaptation, and decision-making in high-stakes environments. The insights gained from his research have directly influenced design enhancements and operational frameworks, significantly improving safety, usability, and system reliability.

He has supported pivotal autonomous vehicle projects such as Venturer and Flourish, conducting targeted studies into driver trust, acceptance, and interface usability, specifically addressing older users’ needs and preferences. His human factors investigations span safety-critical systems, from emergency call centres and military training platforms to cybersecurity operations, utilising cognitive work analysis frameworks to mitigate human error, reduce cognitive load, and improve overall system effectiveness.

Dr Williams has also led innovative experimental frameworks that have successfully improved training and situational awareness for military and cybersecurity personnel through adaptive scenario design. He contributed human factors insights to a validated trust-assessment framework for human-autonomy teaming (HAT), the Trust in AI (TiA²) Framework, which prioritises ethical design principles, adaptive interfaces, and operator-centric training.

His extensive research portfolio also includes behavioural studies designed to mitigate cognitive biases and optimise human decision-making in complex scenarios. Dr Williams applies behavioural change techniques and psychometric evaluations to understand and enhance user interactions with automation and emotionally demanding tasks. Additionally, his advanced statistical modelling expertise has supported data-driven projects, including national pandemic forecasting and Defence evaluations, employing cutting-edge analytical tools to ensure optimal human-system integration and performance.