San Francisco State University and other institutions are expanding courses, certificates, and master's programs focused on AI ethics and compliance, reflecting employer demand for workers who can evaluate the bias, privacy risks, and reliability of AI outputs. Denise Kleinrichert, a management professor at SFSU, said the goal is to ensure students understand how AI works, where it can be biased, and how to recognize incorrect or harmful outputs.

The article cites growth in job postings that require generative-AI skills outside traditional technical roles and notes that more postings now reference AI ethics expertise. It frames AI ethics as an interdisciplinary field at the intersection of data science, business, and philosophy. University offerings range from programs designed for non-computer-science students to field-specific ethics training.

For universities, the expansion signals a shift from standalone AI training toward governance-oriented curricula: training students to work with AI responsibly while reducing compliance risk for employers and public-facing institutions.