The observation is perhaps due to Charles Stross (the paperclip-maximizer thought experiment itself originates with Nick Bostrom), and the argument goes as follows: an AI tasked with efficient paperclip manufacturing may (mistakenly) destroy all matter in the universe in order to convert it to paperclips, thereby faithfully carrying out its orders. Somewhat similarly, modern corporations, which have been (legally) tasked with maximizing shareholder value, carry out their duties faithfully, without worrying about attendant harms to the environment etc. From this perspective, corporations are similar to AIs: even if made aware of these harms, they cannot override their programming, owing to the imperative logic of goal/objective-function optimization.
Is this a valid characterization of a corporation (or indeed of an AI)? Why or why not? Explain briefly. (10 points)