Artificial Intelligence: An Overview
Artificial intelligence is the apex technology of the information era. In the latest in our Profiles in Innovation series, we examine how advances in machine learning and deep learning have combined with more powerful computing and an ever-expanding pool of data to bring AI within reach for companies across industries. The development of AI-as-a-service has the potential to open new markets and disrupt the playing field in cloud computing. We believe the ability to leverage AI will become a defining attribute of competitive advantage for companies in coming years and will usher in a resurgence in productivity.

Heath P. Terry, CFA, (212) 357-1849, Goldman Sachs

… identify clusters of unusual behavior.
Predictive. Predict the likelihood of customer or employee churn based on web activity and other metadata; predict health issues based on wearable data.

What is General, Strong, or True Artificial Intelligence?

General, Strong, or True Artificial Intelligence are terms used for machine intelligence that fully replicates human intelligence, including independent learning and decision making. While techniques like Whole Brain Emulation are being used to work towards the goal of General AI, the amount of compute power required is still considered far beyond current technologies, making General AI largely theoretical for the time being.
Key drivers of value creation

We believe profit pool creation (and destruction) related to the AI theme is best analyzed by first breaking AI down into four key inputs: talent, data, infrastructure, and silicon. These inputs also double as barriers to adoption.

Talent

AI (deep learning in particular) is hard. Per our conversations with VCs and companies in the space, this has created a talent shortage and competition for that talent among large internet and cloud computing vendors (Exhibit 5). AI talent is in high enough demand that "acquihires" are still a common means of acquiring it. As the technology and tooling mature, talent may become less of a bottleneck. However, we believe talent will migrate to interesting, differentiated data sets. Because of this, we believe large, differentiated data sets are the most likely driver of growth and incremental profit dollars as we move into an AI-centric world.
Exhibit 5: A Scarcity of AI Talent is Driving M&A

Data

Most deep learning today is either supervised or semi-supervised, meaning all or some of the data used to train the model must be labeled by a human. Unsupervised machine learning is the current "holy grail" in AI, since raw, unlabeled data could then be used to train models. Broad adoption of deep learning will likely be tied to growth in large data sets (which is happening thanks to mobile and IoT) and to advances in unsupervised machine learning. However, we believe large, differentiated data sets (electronic health records, omics data, geological data, weather data, etc.) will likely be a core driver of profit pool creation over the next decade.
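The distinction above can be made concrete with a short sketch. The example below is not from the report; it assumes scikit-learn is available and uses its bundled digits data purely for illustration. The supervised model is fit against human-provided labels, while the clustering step sees only the raw, unlabeled feature vectors.

    # Minimal sketch: supervised learning needs human-provided labels;
    # unsupervised learning works on raw, unlabeled feature vectors.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_digits(return_X_y=True)            # X: pixel features, y: human labels
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Supervised: the classifier is trained against the labels y_train.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("supervised accuracy:", round(clf.score(X_test, y_test), 3))

    # Unsupervised: k-means groups the same data without ever seeing a label.
    clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", [int((clusters == k).sum()) for k in range(10)])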
The amount of information created worldwide is expected to increase at a CAGR of 36% through 2020, reaching 44 zettabytes (44 trillion GB), according to IDC. Increases in connected devices (consumer and industrial), machine-to-machine communication, and remote sensors are combining to create large data sets that can then be mined for insights and used to train adaptive algorithms.
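As a rough illustration of what a 36% compound annual growth rate implies, the sketch below simply unwinds the stated 2020 figure year by year; the five-year window is arbitrary and only meant to show the arithmetic.

    # Back-of-envelope: unwind the 44 ZB (2020) figure at a 36% CAGR.
    cagr = 0.36
    zb_2020 = 44.0
    for year in range(2020, 2015, -1):
        print(year, round(zb_2020 / (1 + cagr) ** (2020 - year), 1), "ZB")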
The availability of data has also increased dramatically in the last decade, with census, labor, weather, and even genome data available for free online in large quantities. We are also seeing increased availability of satellite imagery, which requires a great deal of compute to analyze fully. The US Geological Survey's Landsat 7 and Landsat 8 satellites image the entire Earth every 8 days, and the USGS makes those images available for free, though even when compressed, the ultra-high-definition images are approximately 1 GB each.
Other companies, like Orbital Insight, are aggregating image data and creating commercial solutions across multiple industries.

Infrastructure

Hardware and infrastructure software are necessary to make AI work. We believe the infrastructure supporting AI will rapidly become commoditized. This view is based on two observations: 1) cloud computing vendors are well positioned to extend their offerings into AI infrastructure, and 2) open source (TensorFlow, Caffe, Spark, etc.) has emerged as the primary driver of software innovation in AI. To spur adoption of AI, we believe large cloud vendors will continue to open-source infrastructure capabilities, limiting the potential for profit pool creation.
Accuracy by training data set size (0 = least accurate, 100 = most accurate):

            5       10      20      50      100     200
Brain       0.3     3.39    45.71   59.07   72.82   98.44
Neck        21.3    30.63   79.97   99.34   99.74   99.33
Shoulder    2.98    21.39   69.64   86.57   95.53   92.94
Chest       23.39   34.45   62.53   96.18   95.25   99.61
Abdomen     0.1     3.23    35.4    65.83   91.01   95.18
Pelvis      0       1.15    15.99   55.9    83.7    88.45
Average     8.01    17.37   51.54   77.15   89.68   95.67

Source: Department of Radiology at Massachusetts General Hospital and Harvard Medical School
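The pattern in the table, accuracy climbing as the training set grows, is easy to reproduce in miniature. The sketch below is illustrative only and unrelated to the CT-scan study cited above; it assumes scikit-learn and reuses its small digits data set, holding the test set fixed while the training set grows through the same sizes as the table's columns.

    # Toy learning curve: accuracy as a function of training set size.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

    for n in (5, 10, 20, 50, 100, 200):        # training set sizes, mirroring the table
        model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
        print(f"n={n:3d}  test accuracy={model.score(X_test, y_test):.2f}")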
Exhibit 7: Internet Giants (such as Google) are spurring interest in AI via open-sourcing technologies (such as TensorFlow)
GitHub repositories most starred, 2015-2016. Source: GitHub
Silicon

The repurposing of GPUs for deep learning has been one of the key drivers of our current "AI Spring". Within the AI/ML ecosystem, there are two primary applications that determine the performance of a neural network, each requiring a different resource setup. The first is the construction and use of a training algorithm. The training algorithm leverages a large data set (usually, the larger the better) to find correlations and build a model that can determine the probability of an output given a new input. Training is very resource-intensive, and most modern training is done on GPU-powered systems. The use of models and algorithms once they have been trained is referred to as inference. Inference requires far less computing power and typically combs through smaller, incremental sets of input data.
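To make the training/inference split concrete, the following sketch separates the two phases. It is illustrative only, assumes PyTorch, and uses synthetic data; the expensive, data-hungry loop is training, while inference is a single cheap forward pass over a new input.

    # Minimal sketch of training vs. inference (assumes PyTorch).
    # Training: many passes over a large labeled data set, ideally on a GPU.
    # Inference: one cheap forward pass per new input, often on lighter hardware.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Toy data: 10,000 labeled examples with 32 features and 2 classes.
    X = torch.randn(10_000, 32)
    y = (X.sum(dim=1) > 0).long()

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # Training: the resource-intensive part.
    model.train()
    for epoch in range(20):
        opt.zero_grad()
        loss = loss_fn(model(X.to(device)), y.to(device))
        loss.backward()
        opt.step()

    # Inference: the trained model scores a single new, unlabeled input.
    model.eval()
    with torch.no_grad():
        new_input = torch.randn(1, 32).to(device)
        probs = torch.softmax(model(new_input), dim=1)
        print("predicted class probabilities:", probs.squeeze().tolist())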
While some GPUs are optimized for inference (Nvidia's P4 and M4 series, for example), given the single-purpose nature of inference, specialized silicon is being developed specifically for that application: FPGAs (field-programmable gate arrays) and ASICs (application-specific integrated circuits). The FPGA was originally developed for prototyping CPUs, but is increasingly being used for inference in artificial intelligence. Google's Tensor Processing Unit is an example of an ASIC purpose-built for AI and machine learning. Microsoft has been using FPGA chips for inference as well. Intel acquired FPGA manufacturer Altera in 2015 on the view that, by 2020, a third of data centers could be leveraging FPGAs for specialized use cases. Xilinx, which pioneered commercially viable FPGAs in the 1980s, has pointed to the cloud and large data centers as a significant avenue …