What is Trust? A Review of Trust Definitions in Autonomous Vehicle Research

I gave a talk on trust conceptualization in automation and autonomous ground vehicles at the Social Responsibility of Algorithms conference 2022. In the talk, I discussed trust in automation and how research on trust in automation is informed by trust in human-human relationships. I presented the values associated with trust in automation and the factors from the literature that influence human trust in automation. I also briefly presented the results of my scoping literature review on trust factors in autonomous ground vehicle research from African countries. I highlighted the need to explore trust in autonomous vehicles from diverse societal contexts using systems perspectives, and ended the talk with some questions for the audience - who, in their various job roles, influence automation design - to ponder.

Social Responsibility of Algorithms Conference Podcast

My PhD cohort (Ned Cooper, Lorenn Ruster, and Amir Asadi) at the School of Cybernetics and I created an episode of the Algorithmic Futures podcast as a pre-conference contribution to the case-study session we organized for the Social Responsibility of Algorithms conference 2022. In the episode, we explored the use of facial recognition for home-based quarantine during the pandemic, how facial recognition may transition after the pandemic, and the potential impact of this transition, using the Multi-Level Perspective as an analytical framework. Lorenn and I narrated our findings from interviews with diverse experts, including Lizzie O'Shea, founder of Digital Rights Watch; Angela Webster, Clinical Epidemiologist, Nephrologist, and Transplant Physician; Diego Silva, Senior Lecturer in Bioethics at the Sydney School of Public Health, University of Sydney; Peter Wells, Professor of Business and Sustainability at Cardiff University; Gavin Smith, Associate Professor in the School of Sociology at the Australian National University; and Mark Andrejevic, Professor in the School of Media, Film, and Journalism at Monash University.

Operationalizing trustworthy AI in Industry (2021)

As a Master's student at the 3A Institute, I interned at Castlepoint Systems as an AI Ethics Intern, tasked with helping develop a trustworthy AI framework to support Castlepoint Systems' global scaling. To develop the framework, I conducted a system analysis and then prototyped some of its results for validation and feedback. The system analysis involved an analysis of Castlepoint's culture, policies, and work practices. I surveyed published trustworthy AI and ethical AI frameworks using document analysis, and conducted interviews and ethnographic research at Castlepoint. I also did a case study on the implications of AI systems used at scale, and then conducted a Castlepoint Systems stakeholder analysis and values elicitation by organizing a judgment call workshop with Castlepoint employees using a Value Sensitive Design approach. As analysis outputs, I proposed an ethical software engineering procedure to Castlepoint and defined Castlepoint's trustworthy AI principles. In addition, I created a trustworthy AI policy and procedure in collaboration with Castlepoint's CEO to operationalize these defined values. Afterwards, I implemented some of the recommendations - on transparency, informed consent, and data privacy policy - using the ethical software engineering procedure I had proposed.

Before implementing the recommendations, I reviewed the findings of the system analysis to identify which of them could be implemented in Castlepoint's software. I performed a Need, Goal, and Objectives (NGO) analysis on the recommendations to evaluate and communicate their need, goal, and objectives to Castlepoint staff, combined with a feasibility study to identify which recommendations were feasible given the available skills, time, and resources, and to assess the prerequisites for implementing them. Afterwards, I conducted requirements discovery, classification, and organization, followed by requirements prioritization and negotiation, and then requirements specification. I then defined the prototype's components and their relationships, and created the system's architectural design, database schema, UI design, and algorithm and system models as abstractions of the system based on the specified requirements. I implemented the specified requirements, algorithms, and designs as a prototype app written in HTML, CSS, JavaScript, jQuery, and Vue.js, using vue-router for the frontend and Node.js, json-server, and Axios for the backend. The prototype was demoed to Castlepoint staff to demonstrate how the recommendations on consent and a transparent privacy policy might be implemented at Castlepoint, and to facilitate discussion about its relevance and the concerns, questions, and system requirements its implementation raises.

Towards vision 2050 in Nigeria using Futures thinking and Systems thinking (2021)

I participated in the 2021 Africa Futures Leadership Series by the School of International Futures, in collaboration with Omidyar Network, where I worked with young African futurists, advocates, inventors, and change-makers across different sectors to reimagine and shape the continent's future using foresight, storytelling, and African indigenous problem-solving techniques. We had provocative conversations on the future of data governance in Africa, envisioned inclusive and equitable digital futures, and discussed how AI and data-driven technologies can be conceptualised, developed, and used by communities across the continent for their benefit and welfare.

Some of the futures thinking tools we used include the "200-year present", the Verge framework, impact wheels, Causal Layered Analysis, identifying and mapping signals of change, the futures triangle, the bubble grid, storytelling, and creating artefacts of the future. The Towards Vision 2050 story is an output of these activities at the individual and group level.

Aviary - A Bird Image Classifier and A Time Machine (Bird Sounds Capsule) (2020)

The Aviary is a virtual sound experience that replays the sounds of birds as they were captured by cameras deployed in parks - basically, a park sounds time machine. It was created as part of a collaborative project at the 3A Institute: we wanted to create a cyber-physical system that would serve as an artwork while aiding and documenting the recovery efforts of national parks in and around Canberra, including Namadgi and Mulligan's Flat. The Aviary consists of two sub-systems: the machine learning bird classifier, sorter, and logging system, and the Aviary sound capsule and calendar interface.

The calendar interface enables users to click on any date and hear the sounds of birds replayed at the precise time those birds were spotted by cameras. I built the interface with Node.js, HTML, CSS, and JavaScript. The machine learning sub-system automatically takes the captured images from the park cameras as input, uses a computer vision model to recognize and classify the images by bird name, and sorts the high-risk and most-common birds into folders, while separating unidentifiable birds from pictures where no birds are present. It also automatically logs each image's capture timestamp and metadata in a MongoDB database, from which the calendar interface fetches information about the birds captured at the date and time the user has chosen to listen to. I developed the bird classifier, sorter, and logging system with Python, Raspberry Pi, and MongoDB. To train the bird classifier model, I did transfer learning on a Google Coral Edge TPU and a TensorFlow Lite example bird model. The calendar interface code is provided here.
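The sorting-and-logging flow described above can be sketched as follows. This is a minimal illustration rather than the project's actual code: the species lists, folder names, and confidence threshold are invented for the example, and the real system stores its records in MongoDB rather than returning dictionaries.

```python
from datetime import datetime

# Hypothetical category lists; the real project's species lists differ.
HIGH_RISK = {"regent honeyeater"}
MOST_COMMON = {"superb fairywren", "sulphur-crested cockatoo"}

def sort_label(label, confidence, threshold=0.5):
    """Map a classifier prediction to a destination folder, mirroring the
    sorting step: high-risk and most-common birds get their own folders,
    low-confidence predictions are set aside as unidentifiable, and
    bird-free frames are separated out."""
    if confidence < threshold:
        return "unidentifiable"
    if label == "no bird":
        return "no_bird"
    if label in HIGH_RISK:
        return "high_risk"
    if label in MOST_COMMON:
        return "most_common"
    return "other_birds"

def make_log_entry(image_name, label, captured_at):
    """Build the kind of record the calendar interface would later query
    (the deployed system writes these to MongoDB)."""
    return {
        "image": image_name,
        "bird": label,
        "timestamp": captured_at.isoformat(),
    }
```

The key design point is that sorting and logging are decoupled from the model itself, so any classifier that emits a label and a confidence score can feed the same pipeline.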
A detailed description of the project and its background is provided by Eryk - a team member - in his blog post.


Instructions for Aviary Use

The Aviary calendar interface is hosted at https://aviary-app.herokuapp.com/. Ensure audio autoplay is enabled in your browser before loading the page; if it is disabled, the bird sounds will not be audible.
You need to select a date for the calendar to start working with the selected date and the current time. That is, it plays the sounds of the birds captured in the park on the selected date at the current time of listening. Otherwise, it will only play the background ambient sound.
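The playback rule above - bird sounds only when the selected date and the current clock time line up with a capture, ambient sound otherwise - can be sketched like this. The entry format and the five-minute matching window are assumptions for illustration, not the deployed logic.

```python
from datetime import datetime, date, time, timedelta

def pick_playback(entries, selected_date, listen_time,
                  window=timedelta(minutes=5)):
    """Return the birds whose captures fall on the selected date and
    near the listener's current clock time; fall back to the ambient
    track when nothing matches."""
    listen_dt = datetime.combine(selected_date, listen_time)
    matches = [
        e["bird"] for e in entries
        if e["date"] == selected_date
        and abs(datetime.combine(selected_date, e["time"]) - listen_dt) <= window
    ]
    return matches or ["ambient"]
```

For example, an entry captured at 06:30 on the selected date would play for a listener at 06:32, while choosing a date with no captures yields only `["ambient"]`.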

3AI Master's Learning Portfolio (2020)

To become a better software engineer, I enrolled in the 3A Institute's Master of Applied Cybernetics at the Australian National University. During my Master's program at the 3AI, I analyzed and interrogated cyber-physical and AI systems through conversations and projects using cybernetics and systems thinking approaches. This portfolio captures key learning moments in my journey and my progress as a cybernetics student. It highlights a few of the projects I worked on as part of my coursework, both individually and in a team.

Astro – A saving and loan management application for cooperative societies (2019)

I worked with my team at Multiskills Nig. Ltd as a full-stack web developer to build a savings and loan management application for a cooperative society's transactions, where members can save money, liquidate and boost their savings, request loans, and repay loans. We used ASP.NET Core 2.0, C#, SQL, Git, JavaScript, jQuery, Visual Studio, Microsoft SQL Server, and Azure DevOps. I designed and maintained the system's database and models, and worked on major functionalities of the app such as creating and onboarding members, members' savings transactions, tracking members' queries, activating loan roll-overs, making the user's dashboard dynamic depending on their role, and making the app responsive.

Mechanism Design for Matching in Nigeria’s National Youth Service Corps: A Case Study (2019)

Nigeria's National Youth Service Corps is a one-year national service scheme in which graduates are matched to jobs and states within the country. The scheme achieves very low performance in matching graduates to both states and zones, so graduates rematch themselves using incentives, making the system unstable. I worked with mechanism design researchers in Nigeria and the United States of America to build a fair and stable matching mechanism for the scheme. We conducted interviews, a survey, survey data analysis, and problem modeling. I presented the research at the Mechanism Design for Social Good workshop at the ACM Conference on Economics and Computation, 2019, and at ACM COMPASS, 2019.
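The mechanism we built is not reproduced here, but as a generic illustration of what a stable matching mechanism looks like, the textbook Gale-Shapley deferred acceptance procedure is sketched below, simplified to one-to-one matching with complete preference lists. The graduate and state names are invented for the example.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance: proposers (e.g. graduates)
    propose in preference order; each reviewer (e.g. a posting)
    tentatively holds the best proposal it has seen, releasing its
    previous hold when a preferred proposer arrives. The result is a
    stable matching: no graduate-posting pair would both rather be
    matched to each other than to their assigned partners."""
    # rank[r][p] = position of proposer p in reviewer r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # proposers with no tentative match
    next_choice = {p: 0 for p in proposer_prefs}
    held = {}                              # reviewer -> proposer tentatively held
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best not-yet-tried option
        next_choice[p] += 1
        if r not in held:
            held[r] = p
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])           # r trades up; its old match is freed
            held[r] = p
        else:
            free.append(p)                 # r rejects p; p tries the next option
    return {p: r for r, p in held.items()}
```

Because no participant can profitably rematch outside the mechanism, the incentive-driven rematching described above is ruled out by construction - which is precisely why stability is the property of interest here.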

Using Machine Learning to Improve Farm Produce in Nepal (2019)

I volunteered as a data tagger at Omdena to build a model for crop classification in Nepal using satellite images. The model recognizes where food crops like rice and wheat are planted, tracks their growth, and helps improve it, in order to support the UN in fighting hunger. I labeled and prepared satellite image data using Labelbox and Google Earth Engine. More details on the project can be found here.

Gang Fight Prevention using Machine Learning Models (2019)

I volunteered as a Machine Learning Engineer at Omdena to build a model that identifies and helps prevent potential gang fights or violence in Chicago using live tweets, without profiling any Twitter user. I contributed by labeling tweets in a Microsoft Excel sheet. I also scraped and analyzed Chicago gangs' conversations and studied gang behaviors on social media (e.g., gang terms, narcotics terms, corporate security). More details on the project can be found here.

CodeEazee - A Programming teaching tool (2017)

In this project, I conducted a survey, a literature review, and interviews to determine the factors responsible for the reduced interest of non-computer-science students in programming in Nigerian tertiary institutions. I reviewed the various approaches used in teaching programming and developed a web-based teaching platform that teaches skills such as computational thinking, algorithm design, programming in general, and Python programming specifically. The platform, named CodeEazee, is a problem-solving, self-teaching tool focused on teaching programming rather than programming languages. It uses templates and gamification, and embeds a third-party Python interpreter to support offline programming learning. I developed CodeEazee as a final-year project using HTML, CSS, Bootstrap, PHP, JavaScript, jQuery, MySQL, Git, and IPython. The project's research paper can be found here; it was presented at SciPy 2018 in the USA.

FULangS - A Capstone Scripting Tool (2017)

I worked with my course mates during my undergraduate program to develop a quasi general-purpose scripting language to make learning programming easy for beginners. We developed FULangS with the C programming language, Flex, Bison, and Cygwin. FULangS scripts are interpreted on a virtual machine, with special feature support for stack machines and garbage collection. FULangS both interprets and compiles code. I developed the FULangS interpreter. The project's research paper can be found here.