PX14A6G 1 bowy_px14a6g.htm BOWYER RESEARCH - PX14A6G

 

Summary

Bowyer Research urges shareholders to vote AGAINST Proposal No. 7 on the 2024 proxy ballot of Apple, Inc. (“Apple” or the “Company”). The “Resolved” clause of Proposal No. 7 states:

RESOLVED: Shareholders request that Apple Inc. prepare a transparency report on the company’s use of Artificial Intelligence (“AI”) in its business operations and disclose any ethical guidelines that the company has adopted regarding the company’s use of AI technology. This report shall be made publicly available to the company’s shareholders on the company’s website, be prepared at a reasonable cost, and omit any information that is proprietary, privileged, or violative of contractual obligations.

 

The supporting statement to this Proposal1, submitted by Segal Marco Advisors (the “Proponent”), contends that:

1. Increased use of artificial intelligence/machine learning models (referred to collectively as AI) presents serious “social policy issues.” 

2. Failure to adopt a specific set of ethical guidelines regarding AI will lead to costly labor disruptions. 

3. Such ethical systems require Apple not to use AI to “replace or supplant the creative work of professional writers.” 

 

These assertions, however, rest on flawed argumentation and analysis that:

 

1. Demand concrete AI self-regulation during a quickly evolving phase of the technology’s development. 

2. Unfairly malign and oppose a primary function of developing AI in the technological workspace. 

3. Presuppose a Luddite, restrictionist framework for Apple’s AI outlook. 


1https://www.sec.gov/Archives/edgar/data/320193/000130817924000010/laapl2024_def14a.pdf


 

1. The Proposal demands concrete AI self-regulation during a quickly evolving phase of the technology’s development. 

 

As asserted in Proposal No. 7’s supporting statement, “the use of AI in human resources decisions may raise concerns about discrimination or bias against employees. The use of AI to automate jobs may result in mass layoffs and the closing of entire facilities.” In the Proponent’s view, AI’s potential to disrupt existing workplace infrastructure is concerning enough to justify preemptive corporate self-regulation designed to resolve AI conflicts before they start. Yet preemptive self-regulation of AI, a technology whose systems rely on an ever-evolving knowledge base, can easily cross the line from self-regulation into self-obstruction of AI’s legitimate goals. The Proponent’s view betrays an assumption that regulating AI as it exists today will achieve the requisite goals for a more developed, later state of the technology. That assumption suffers from a profound lack of evidence; the Proponent is asking for too much self-regulation, too soon. Apple itself has hinted at new AI-related products in the near future.2 Apple and its developers are not prescient. Regulating such products in the current moment would jeopardize the Company’s ability to roll out AI products successfully by rendering Apple’s developers servants to the parameters of their own regulatory commitments, thereby hindering innovation and diminishing the Company’s ability to provide high-quality products for its users.

 

2. The Proposal unfairly maligns and opposes a primary function of developing AI in the technological workspace. 

 

In its supporting statement, the Proposal states that AI models ought not to be used to “replace or supplant the creative work of professional writers.” The Proponent’s statement betrays a belief that AI’s ability to replace the product of human effort is a de facto negative. This belief fundamentally misunderstands the purpose of machine learning models: at its most basic level, AI is designed to alleviate human workload. By way of example, ChatGPT by definition replaces the creative work of a writer. The Proposal thus amounts to a philosophical argument against AI entirely, rather than a mere desire for further self-regulation.


2 https://www.cnbc.com/2024/02/01/tim-cook-teases-apple-ai-announcement-later-this-year.html


 

Such arguments place the Proposal at odds not only with Apple’s ostensible trajectory but with its existing iOS technology base, including facial and animal recognition software.3 To comply with the Proponent’s demand, Apple would have not only to cease further AI development but to roll back existing technological features, a move that would generate widespread reputational risk for the Company and diminish Apple’s status as an industry leader.

 

3. The Proposal presupposes a Luddite, restrictionist framework for Apple’s AI outlook. 

 

In its supporting statement, Proposal No. 7 asserts that “adopting an ethical framework for the use of AI technology will strengthen [Apple’s] position as a responsible and sustainable leader in its industry.” Yet, as demonstrated in point 2 above, the Proponent is ideologically opposed to even comparatively rudimentary uses of AI, such as replacing the work of writers and (to a degree) artists, an area of innovation in which all of Apple’s major competitors are making notable strides.

 

For Apple to be a “responsible and sustainable” industry leader, it must first remain a leader. The excessive, premature demands for all-encompassing self-regulation that the Proponent advances would only serve to jeopardize that leader status. Far from elevating Apple’s ability to create value for its shareholders, such demands represent the kind of activist pressure capable of hijacking a company’s operational philosophy.

 

Conclusion

As demonstrated in this report, Proposal No. 7 fails to advocate effectively for AI self-regulation by:

 

1. Demanding concrete AI self-regulation during a quickly evolving phase of the technology’s development. 

2. Unfairly maligning and opposing a primary function of developing AI in the technological workspace. 

3. Presupposing a Luddite, restrictionist framework for Apple’s AI outlook. 


3 https://www.cnbc.com/2019/06/26/apple-adds-new-image-recognition-features-for-pets-to-ios.html

 

No shareholder in a business truly invests “in AI.” Shareholders invest in a specific company’s use of machine learning models, whether at OpenAI, Google, or Apple. Given this reality, Apple’s operational philosophy regarding AI matters now more than ever. Consistent with its fiduciary duty, the Company should resist the temptation to capitulate to blind calls for preemptive self-regulation or to activist demands to ‘turn back the clock’ on AI development. Handling this next frontier of technology ethically includes having the will to pursue the positive potential of machine learning in earnest; by allowing that pursuit to be tarnished by ideological demands, Apple would diminish its ability to provide quality products for its users and continuing value for its shareholders.