Problem Book Projects
The advent of publicly available generative AI tools has the potential to fundamentally alter how education operates. There is a risk of unethical use of AI tools for summative assessments such as coursework reports and dissertation theses, with detrimental effects on the accuracy of student results and individual learning.
Many students acquire good programming skills at university and are exposed to an array of relevant technologies. However, they often struggle to demonstrate their readiness to work within industry. Could a virtual software house, run by staff and mentored by alumni from industry, attract real clients with real problems that students could be recruited to work on?
Cyber Job Analytics (Status: Pending)
Technologies (e.g., programming languages, operating systems, frameworks, APIs, service platforms) are foundational to many career pathways. However, identifying which technologies to learn is difficult, given that learning curves can be costly, complicated and time-consuming, particularly for students, graduates and career changers.
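One way such a project might begin is by counting how often named technologies appear in job adverts, to produce a simple demand ranking. The sketch below is illustrative only: the sample adverts and keyword list are invented, and a real pipeline would ingest postings from job-board feeds and use a curated technology taxonomy.

```python
from collections import Counter
import re

# Hypothetical sample job advert snippets (invented for illustration).
job_ads = [
    "Seeking a security analyst with Python, Linux and AWS experience.",
    "DevOps engineer: Docker, Kubernetes, Python and Linux required.",
    "Penetration tester familiar with Linux, Metasploit and Python.",
]

# Illustrative keyword list; in practice this taxonomy would need curating.
technologies = ["Python", "Linux", "AWS", "Docker", "Kubernetes", "Metasploit"]

def count_mentions(ads, keywords):
    """Count how many adverts mention each technology (case-insensitive,
    whole-word match)."""
    counts = Counter()
    for ad in ads:
        for kw in keywords:
            if re.search(r"\b" + re.escape(kw) + r"\b", ad, re.IGNORECASE):
                counts[kw] += 1
    return counts

# Rank technologies by how many adverts mention them.
ranked = count_mentions(job_ads, technologies).most_common()
print(ranked)
```

Even this toy version surfaces the core design questions a student would face: how to normalise synonyms (e.g. "K8s" vs "Kubernetes"), and how to weight mentions by role or seniority.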
A common approach when using virtual machines for assessments, particularly with large student cohorts, is to use a "one size fits all" template. However, the resulting lack of differentiation can limit the scope, scale and creativity of authentic assessments.
Assessment feedback should be clear, specific, objective, constructive, actionable, supportive, and prompt. Meeting this requirement for small cohorts is typically feasible. However, meeting this requirement for large cohorts across multiple assessments is not only challenging, but potentially detrimental to student outcomes for multiple reasons.
How do we create and test a lesson plan for 11-12 year-olds that introduces them to concepts of cyber security in a way that is non-technical, intuitive, self-directed and contextual to their generation?