Dynamic BCW Panel Explores the Ethical Frontier of AI

From left, seated: Chrystal P. Mauro, Senior Counsel, IBM Consulting; Amy B. Goldsmith, Chair of the Cybersecurity, Data Management and Privacy practice at Tarter Krinsky & Drogin; Mairead Jones-Kennelly, Senior Counsel, The State University of New York and the AI Legal Institute at SUNY (ALIS); and Elaine C. Zacharakis, Health, Privacy, and Technology Consultant/Attorney at Zacharakis Loumbas Law LLC, Associate Professor and HIPAA Compliance Officer, New York Institute of Technology, and Adjunct Professor, Pace Law School.
Panelists not pictured: Pace University Philosophy Professor James Brusseau, PhD; Hannah Hage, Assistant Counsel, The State University of New York and the AI Legal Institute at SUNY (ALIS).
The Business Council of Westchester (BCW) hosted a dynamic panel discussion on Monday at Pace University, bringing together a diverse group of experts to explore the critical issue of establishing ethical standards and regulations for artificial intelligence.
“Governing the Future: The Ethical Frontier of AI Regulations and Policy” was part of the BCW’s AI Alliance 360 series. The panelists, including legal and educational experts from both the private and public sectors, grappled with the challenge of a rapidly evolving technology that is outpacing the guidelines intended to promote its responsible use. BCW President and CEO Marsha Gordon moderated the conversation.
“It’s an incredibly timely conversation that’s reshaping policy, practice and innovation across industries,” said Gordon, who began the event with a “coffee chat” with IBM Consulting’s Senior Counsel, Chrystal P. Mauro.
Mauro highlighted IBM’s commitment to transparency and testing in its AI products. “We have put out literature and studies on the five pillars of trust of ethical AI: fairness, robustness, explainability, privacy and transparency,” said Mauro. “Fairness is what are you using for your models? What are you training your models on? Is that data filled with biases and what kind of outcomes are you getting?”
Pace University Philosophy Professor James Brusseau, PhD, drew a distinction between the approaches to AI regulation in the United States and Europe. “The U.S. uses acceleration ethics. Europe uses precautionary ethics,” said Brusseau. “What (Americans) do is we roll out innovation, and then innovation creates benefits, but also creates risks or problems like privacy risks. Then we try to use a next step of innovation to solve that problem.” He noted that “Team Europe…is a more centralized system, and what they want to do is foresee possible ethical risks.”
Amy B. Goldsmith, Chair of the Cybersecurity, Data Management and Privacy practice at Tarter Krinsky & Drogin, cautioned attendees against using public AI tools like ChatGPT for sensitive tasks. She shared an anecdote about an individual who was uploading contracts into a public language model, unaware of the potential for a breach. “If he’s uploading contracts into ChatGPT 4, which is a public language model, he is already breaching a contract because he’s violating the confidentiality clause, and God knows what other private information might be in that contract,” said Goldsmith.
Mairead Jones-Kennelly, Senior Counsel, The State University of New York and the AI Legal Institute at SUNY (ALIS), announced that ALIS will soon make a public repository of AI usage available. “That playbook has policies and guidance documents, general use principles, ethical guidelines and data governance for anybody to access,” said Jones-Kennelly.
Hannah Hage, Assistant Counsel, The State University of New York and the AI Legal Institute at SUNY (ALIS), addressed the challenge of applying existing state laws to AI. She noted that state law requires human oversight when AI makes decisions that affect people, such as medical diagnoses or financial aid. However, she pointed out the lack of clarity in the law. “There is a lack of guidance on what human oversight means. How do we comply with that? What does it look like?” Hage said. “Another thing is an appeals process. Once a decision gets made about an individual, is there an ability for that individual to appeal that decision and have access to the information that the AI tool used?”
On the topic of educating students, Keith Landa, Director of the Teaching, Learning, and Technology Center at Purchase College, said that the decision to use AI in the classroom is up to individual educators. “What we want is AI in the classroom to support student learning and attainment of the learning outcomes, and not short circuit that learning,” said Landa, emphasizing the importance of preparing students for an “AI-powered world.”
David Sachs, Ed.D., a Professor at the Seidenberg School of CSIS at Pace University, recommended that everyone, not just students, use platforms that provide transparency, such as Perplexity. “I use it because it gives you references for everything that it does. It gives you the hyperlinks down at the bottom, and you can click and verify,” said Sachs.
Elaine C. Zacharakis, a Health, Privacy, and Technology Consultant/Attorney at Zacharakis Loumbas Law LLC, Associate Professor and HIPAA Compliance Officer, New York Institute of Technology, Adjunct Professor, Pace Law School, spoke about the growing role of AI in healthcare. “The benefits of AI for healthcare are tremendous in terms of analyzing big data, like what cancer therapy works for this type of patient,” said Zacharakis, while also acknowledging that “there is also a lot of risk to privacy.”
Rep. George Latimer noted that the AI revolution will deliver wrenching change.
“Artificial intelligence can go places that I can’t even imagine,” said Latimer. “There will be dislocation of career paths because of this. How do we anticipate and respond to that? That is an issue for business, ethicists and for government.”
Pace University President Marvin Krislov said that AI is a continuing topic at his school’s AI Lab. Later this month, the school will host an AI conference for educators, and on September 18, Google will offer a Gemini AI pop-up on campus; Pace is one of only 10 universities chosen for the opportunity.
“With so much activity, people around here have started asking, where are we? And are we quietly rebranding as AI University? No. We’re not rebranding…but Pace has always been a place that focused on not only giving students a great education but giving them the tools they need to succeed in life and in their careers.”
Regeneron, Montefiore Einstein, CCLEAN, Dorf Nelson & Zauderer LLP, the Westchester County Office of Economic Development and Verizon sponsored “Governing the Future: The Ethical Frontier of AI Regulations and Policy.”
Similar News Items
Bringing its exceptional brand of care to assisted living and memory care, BCW Member Broadview at Purchase College held a grand opening ceremony on Wednesday for its High Point Center for Care. More than 200 people attended the reception and ribbon-cutting ceremony for the new center, which incorporates Broadview’s signature concept of intergenerational and lifelong […]
Governor Kathy Hochul has announced that $200 million in funding is available through the State’s two signature downtown revitalization and economic development programs—$100 million each for Round 9 of the Downtown Revitalization Initiative (DRI) and Round 4 of the NY Forward program, which focuses on revitalizing smaller and rural downtowns. Together, the two programs have […]
The Business Council of Westchester has joined with nearly 50 business and civic organizations to announce the formation of the Build More New York coalition to support jobs, construction and development across the state by reforming the outdated Scaffold Law. Build More New York is rallying around Rep. Nick Langworthy’s Infrastructure Expansion Act, which will […]