
The Theme - New Media and New Digital Economy Ventures

 
(San Francisco, California, U.S.A. - Jeffrey M. Wang)

 

"The Rise of the New Digital Economy: Trends, Opportunities and Challenges"



<DRAFT>
 

1. Overview


The world is changing fast. Technology has permeated every aspect of modern life, both personal and professional. We are at the dawn of the Fourth Industrial Revolution, which will bring together digital, biological and physical technologies in new and powerful combinations. Rapid developments in technology and science are changing the way we live, work and do business. These changes come with challenges for our industries, work places and communities. Digital technologies have immense potential to drive competition, innovation and productivity.

Innovation in the business world is accelerating exponentially, with new, disruptive technologies and trends emerging that are fundamentally changing how businesses and the global economy operate. Today, most business professionals spend almost every waking second of their day either interacting with a computer or carrying one on their person - and many people even sleep tethered to a computer via wearable devices that track activity and sleep patterns.  

Today’s new digital economy is forcing organizations to think faster, more openly and more flexibly. By understanding emerging digital ecosystems, businesses can implement strategies that differentiate the customer experience and drive competitive advantage. The strategies for succeeding in the knowledge economy are predicated on the notions that knowledge and information are costly to generate and can be protected. It makes sense to build your enterprise’s competitive differentiation around its knowledge capital only if that information is unique to the firm. 

- Digital Changes Driving the New Digital Economy:

Today, the notion that knowledge and information are costly and protectable is being challenged by the four forces of digital change. In combination, these forces are pushing the knowledge economy to the margins and giving rise to the "New" Digital Economy (NDE). The Internet has underpinned, enabled and accelerated many of these trends, and it lies at the core of the NDE as well.

  • Sharing, collaboration, and on-demand. Digital technologies make it easy to share information freely. Collaboration platforms create new pathways for knowledge production that depend on connections between people rather than on hierarchical controls. We are now seeing products become platforms that drive impressive new services in ways that were just not imagined when the products were first designed. A number of new commercial online services have emerged in recent years, each promising to reshape some aspect of the way people go about their lives. Some of these services offer on-demand access to goods or services with the click of a mouse or swipe of a smartphone app. Others promote the commercialized sharing of products or expertise, while still others seek to connect communities of interest and solve problems using open, collaborative platforms.
  • Hyperconnectivity. Information systems, particularly the Internet of Things (IoT), are generating powerful live information flows. Our enthusiasm for creating these data streams - from devices and ourselves - implies that information at the point of creation is more valuable than any legacy knowledge. Soon everything will be connected; every asset, supplier, worker and stakeholder. This means products can now work together to get jobs done faster and more safely than ever before. However, hyperconnectivity, driven by the rise of the digital-everything economy and the IoT, will soon disrupt the cybersecurity landscape in unprecedented ways.
  • Artificial intelligence (AI) and machine learning. AI and other advanced analytics technologies decrease information processing costs. With the proper dataset behind it, AI can help alleviate many repetitive and redundant tasks, changing the way humans approach work. Today, AI can write poetry and songs, discover new compounds in medicine, and be used both to create information and to counter fake news. It may not be long before an AI receives patients. Just as companies used capital to accelerate through the experience curve in manufacturing, it is now possible to use machine learning to power through the experience curve of the digital economy. We are now in a world where things get done faster, more easily, with more accuracy, and based on better knowledge. Everything we do is touched in some way by AI. AI is becoming more and more ingrained into our lives.
  • Crowdsourcing the world’s cognitive surplus. Wikipedia, YouTube, and Linux are early examples of using collaboration technology to harness latent cognitive capacity around the globe to topple traditional sources of competitive advantage. Tapping into the intelligence of groups - within a company or around the globe - can help organizations combat bias, make better decisions, and compete for talent and ideas with the help of artificial intelligence. Think of crowdsourcing as applying the principles of the sharing economy to cognitive surpluses. Many people have thoughts, ideas, and skills with real business value that often go unused. Companies can tap into those surpluses both internally and externally, often with the help of technology. In fact, Alphabet’s former executive chairman, Eric Schmidt, has predicted that the next $100 billion company will likely result from the wisdom of the many. 
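The "wisdom of the many" claim above can be illustrated with a toy simulation (an illustrative sketch, not drawn from the source): many independent noisy guesses, once averaged, land closer to the truth than a typical individual guess.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def crowd_estimate(true_value, n_people, noise=0.5):
    """Each person gives a noisy guess; the crowd answer is the mean."""
    guesses = [true_value * (1 + random.uniform(-noise, noise))
               for _ in range(n_people)]
    return sum(guesses) / len(guesses)

true_value = 100.0
single = crowd_estimate(true_value, 1)        # one person's guess
crowd = crowd_estimate(true_value, 10_000)    # the averaged crowd

# Individual errors are large; they mostly cancel out in the average.
print(abs(single - true_value), abs(crowd - true_value))
```

The cancellation of independent errors is the statistical core of crowdsourced estimation; real crowdsourcing platforms add incentive design and aggregation rules on top of it.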

Together, these forces mean all knowledge has true competitive value only at the moment it is created. It decays quickly into legacy knowledge. The only way to find competitive advantage in this digital economy is to become a Live Business: to learn to use information in the moment to make decisions, meet demand, and respond to customers. The difference between thriving in the legacy knowledge economy and thriving in the new digital economy is the speed at which companies can act on data from all sources.  

- What is the "New" Digital Economy (NDE)?

The New Digital Economy (NDE) is emerging from a combination of technologies, mainly from the ICT (Information and Communications Technology) space, that are becoming pervasive across mechanical systems, communications, infrastructure, and the built environment, and thus playing an increasingly important role, not only in social and political life, but in research, manufacturing, services, transportation, and even agriculture.

The technologies underpinning the NDE, most importantly, include: advanced robotics and factory automation (sometimes referred to as advanced manufacturing); new sources of data from mobile and ubiquitous Internet connectivity (sometimes referred to as the Internet of Things); cloud computing; big data analytics and artificial intelligence (AI). 

"The main driver of the NDE is the continued exponential improvement in the cost-performance of information and communications technology (ICT), mainly microelectronics, following Moore’s Law. This is not new. The digitization of design, advanced manufacturing, robotics, communications, and distributed computer networking (e.g. the Internet) have been altering innovation processes, the content of tasks, and the possibilities for the relocation of work for decades. However, three features of the NDE are relatively novel. First, new sources of data, from smart phones to factory sensors, are sending vast quantities of data into the “cloud,” where they can be analysed to generate new insights, products, and services. Second, new business models based on technology and product platforms - platform innovation, platform ownership, and platform complimenting - are significantly altering the organization of industries and the terms of competition in a range of leading-edge industries and product categories. Third, the performance of ICT hardware and software has advanced to the point where artificial intelligence and machine learning applications are proliferating. What these novel features share is reliance on very advanced and nearly ubiquitous ICT, embedded in a growing platform ecosystem characterized by high levels of interoperability and modularity." - [United Nations, UNCTAD]

The rise of new digital industrial technology, known as Industry 4.0, is a transformation that makes it possible to gather and analyze data across machines, enabling faster, more flexible, and more efficient processes to produce higher-quality goods at reduced costs. This manufacturing revolution will increase productivity, shift economics, foster industrial growth, and modify the profile of the workforce—ultimately changing the competitiveness of companies and regions. 

- What is New Media?

New Media is a 21st Century catchall term used to define all that is related to the Internet and the interplay between technology, images and sound. In fact, the definition of new media changes daily, and will continue to do so. New media evolves and morphs continuously. What it will be tomorrow is virtually unpredictable for most of us, but we do know that it will continue to evolve in fast and furious ways. 

Digital technologies underpin innovation and competitiveness across private and public sectors and enable scientific progress in all disciplines. ICT and Digital Media are now integrated into almost every technology, industry and job. New media are forms of media that are native to computers: computational in nature and reliant on computers for distribution. Currently, some examples of new media are websites, mobile apps, virtual worlds, multimedia, computer games, human-computer interfaces, computer animation and interactive computer installations. 

- Why is Digitization So Important?

The New Digital Economy (NDE) is not only reshaping the economy and changing people's day-to-day life, but also creating enormous business opportunities worldwide. To meet the information requirements necessary to capitalize on the NDE, firms must first digitize, connect and collect data on all of their assets, suppliers, workers and stakeholders. They also need high-speed platform technology capable of quickly analyzing the data from multiple angles and combining internal content with external information. 
 
One of the main advantages of digitization is the ‘shortcut’ it offers when developing useful services for people, especially where resources are few. However, digitization is a means to achieve a goal, not a goal in itself. Digitizing and transforming industry and services aims to establish the next generation of digital platforms and to rebuild the underlying digital supply chain on which all economic sectors depend. It should enable all economic sectors and application areas to adapt, transform and benefit from digitization, notably by also allowing smaller players to capture value. Digital platforms are becoming a key factor in one economic sector after another, enabling new types of services and applications, altering business models and creating new marketplaces.  
 
(Eiffel Tower, Paris, France - Ching-Fuh Lin)

2. Technologies for Digitizing and Transforming Industry and Services

 
Progress in technologies such as photonics, micro- and nanoelectronics, smart systems and robotics is changing the way we design, produce, commercialize and generate value from products and related services. 
 
- Cyber-Physical Systems (CPS):
 
Steady advances in controls, communications, and computing are enabling new forms of cyber-physical systems (CPS), and are simultaneously redefining the role and position of humans in broad areas of applications, and blurring the traditional boundaries between humans and technology. 
 
CPS are systems where real-time computing and physical systems interact tightly. These complex and physically-entangled systems are of crucial importance for the quality of people's lives and for the economy. CPS are engineered systems that are built from, and depend upon, the seamless integration of computational algorithms and physical components. Here we take the concept of CPS to mean large, complex physical systems interacting with a considerable number of distributed computing elements for monitoring, control and management, which can exchange information among themselves and with human users.
 
Advances in CPS will enable capability, adaptability, scalability, resiliency, safety, security, and usability that will far exceed the simple embedded systems of today. CPS technology will transform the way people interact with engineered systems - just as the Internet has transformed the way people interact with information. New smart CPS will drive innovation and competition in sectors such as agriculture, energy, transportation, building design and automation, healthcare, and manufacturing. Moreover, the integration of artificial intelligence with CPS creates new research opportunities with major societal implications. 
 
CPS has provided an outstanding foundation for building advanced industrial systems and applications, integrating innovative functionalities through the Internet of Things (IoT) and the Web of Things (WoT) to connect the operations of physical reality with computing and communication infrastructures. A wide range of industrial CPS-based applications have been developed and deployed in Industry 4.0. 
 
Today's world is a network of interconnected, embedded computer systems with components ranging in size and complexity. Researchers and hackers have shown that networked embedded systems are vulnerable to remote attack. Technology for the construction of safe and secure cyber-physical systems is badly needed.

- Flexible and Wearable Electronics:

Flexible and wearable electronics combines new and traditional materials with large area processes to fabricate lightweight, flexible, printed and multi-functional electronic products.
 
[Stanford E-Wear]: "Wearable electronics has emerged as a new form of electronics that combines sensors and wireless communication to allow monitoring of vital information autonomously. Unlike typical sensor networks, wearable electronics need to form conformal and intimate contact with objects to be monitored. Furthermore, they have to be comfortable to wear while providing accurate information." 
 
Wearable electronics are smart electronic devices that can be connected to the Internet and worn on the body as accessories. These devices are a key segment of IoT devices, and they can exchange data through the Internet with the user and with other connected devices. Applications for wearable electronics include health monitoring, disease detection, robotics, robotic surgery, implantable electronics, driverless cars, structural monitoring, virtual reality, and augmented reality.
 
Wearable devices offer benefits like optimized decision-making, ease of handling emergencies, cost cutting, enhanced quality of living, remote control access, healthy lifestyle, time management, commercial benefit, and better safety.  
 
Despite the explosion of interest in wearable electronics in recent years, numerous challenges remain before it becomes a truly commercializable technology. One major challenge is the highly interdisciplinary nature of the field, which demands the convergence of many disciplines - notably materials, devices, system integration, software, and application verification. The impact extends far beyond health care: it will improve everything from the environment to defense, the economy, and energy production.
 
- Photonics -  An Enabling Technology:

Photonics is the technology of generating and harnessing light and other forms of radiant energy whose quantum unit is the photon. Photonics involves cutting-edge uses of lasers, optics, fiber-optics, and electro-optical devices in numerous and diverse fields of technology - alternate energy, manufacturing, health care, telecommunication, environmental monitoring, homeland security, aerospace, solid state lighting, and many others.
 
Lasers and other light beams are the “preferred carriers” of energy and information for many applications. For example: Lasers are used for welding, drilling, and cutting of metals, fabrics, human tissue, and other materials; Coherent light beams (lasers) have a high bandwidth and can carry far more information than radio frequency and microwave signals; Fiber optics allow light to be “piped” through cables.
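The bandwidth claim above follows directly from carrier frequency: the usable modulation bandwidth of a carrier scales with its frequency, and optical carriers oscillate orders of magnitude faster than microwaves. A small illustrative calculation (the wavelengths below are typical round values, chosen for illustration):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def carrier_frequency(wavelength_m):
    """f = c / wavelength: a higher carrier frequency permits more bandwidth."""
    return C / wavelength_m

infrared_laser = carrier_frequency(1550e-9)  # telecom-band fiber laser, ~193 THz
microwave_link = carrier_frequency(0.03)     # 3 cm microwave carrier, ~10 GHz

print(f"optical carrier ~{infrared_laser / microwave_link:,.0f}x the frequency")
```

The roughly four-orders-of-magnitude gap in carrier frequency is why a single optical fiber can carry vastly more information than a radio or microwave channel.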
 
Research in photonics ranges in scope from fundamentally new tools, such as small-footprint, high-throughput multiphoton microscopes, through exceptionally high-power semiconductor lasers, to components and systems for next-generation optical networks for both the Internet and data centers, and into consumer equipment like 3-D displays. New areas are constantly explored by research teams worldwide, as photonics becomes more pervasive in our lives. Communications, displays, medicine, manufacturing and imaging are just a few applications.
  
- Unconventional Nanoelectronics:

Shrinking transistors have powered 50 years of advances in computing - but now other ways must be found to make computers more capable. Mobile apps, video games, spreadsheets, and accurate weather forecasts: that’s just a sampling of the life-changing things made possible by the reliable, exponential growth in the power of computer chips over the past five decades. The continual cramming of more silicon transistors onto chips has been the feedstock of exuberant innovation in computing. But transistor miniaturization is now approaching fundamental physical limits, which could stymie future advances in electronics unless new architectures and designs allow progress in chip performance to continue. There are also worries about the rising cost of designing integrated circuits. Future generations of electronics will be based on new devices and circuit architectures, operating on physical principles that cannot be exploited by conventional transistors. Research scientists worldwide seek the next device that will propel computing beyond the limitations of current technology.

[Nanoelectronics for 2020 and Beyond]: "The semiconductor industry is a major driver of the modern U.S. economy and has accounted for a large portion of the productivity gains that have characterized the global economy since the 1990s. Recent advances in this area have been fueled by what is known as Moore’s Law scaling, which has successfully predicted the exponential increase in the performance of computing devices for the last 40 years. This gain has been achieved due to ever-increasing miniaturization of semiconductor processing and memory devices (smaller and faster switches and transistors). Continuing to shrink the dimensions of electronic devices is important in order to further increase processor speed, reduce device switching energy, increase system functionality, and reduce manufacturing cost per bit. However, as the dimensions of critical elements of devices approach atomic size, quantum tunneling and other quantum effects degrade and ultimately prohibit the operations of conventional devices. Researchers are therefore pursuing more radical approaches to overcome these fundamental physics limitations.

Candidate approaches include different types of logic using cellular automata or quantum entanglement and superposition; 3D spatial architectures; and information-carrying variables other than electron charge, such as photon polarization, electron spin, and position and states of atoms and molecules. Approaches based on nanoscale science, engineering, and technology are most promising for realizing these radical changes and are expected to change the very nature of electronics and the essence of how electronic devices are manufactured. Rapidly reinforcing domestic R&D successes in these arenas could establish a U.S. domestic manufacturing base that will dominate 21st-century electronics commerce. The goal of this initiative is to accelerate the discovery and use of novel nanoscale fabrication processes and innovative concepts to produce revolutionary materials, devices, systems, and architectures to advance the field of nanoelectronics."
 
- Electronic Smart Systems (ESS):
 
The technology area Electronic Smart Systems (ESS) focuses on the challenges that the ongoing digitization of society introduces through the deep penetration of embedded sensing, acting and communicating electronics in our environment. Things become smart and connected; sensor systems and smart objects provide the sensing and interacting edges that are bringing the entire world online. Embedded electronics become more pervasive and provide an opportunity for a disruptive wave of innovation in our daily living. 
 
Smart systems incorporate functions of sensing, actuation, and control in order to describe and analyze a situation, and make decisions based on the available data in a predictive or adaptive manner, thereby performing smart actions. In most cases the “smartness” of the system can be attributed to autonomous operation based on closed loop control, energy efficiency, and networking capabilities. A lot of smart systems evolved from microsystems. They combine technologies and components from microsystems technology (miniaturized electric, mechanical, optical, and fluidic devices) with other disciplines like biology, chemistry, nanoscience, or cognitive sciences.
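The closed-loop "sense, decide, act" cycle described above can be sketched with the simplest possible controller, a bang-bang thermostat with hysteresis (all constants below are invented for illustration; real smart systems use far richer control laws):

```python
def thermostat_step(temperature, setpoint, heater_on, hysteresis=0.5):
    """One control step: sense the temperature, decide, act on the heater."""
    if temperature < setpoint - hysteresis:
        return True   # too cold: switch the heater on
    if temperature > setpoint + hysteresis:
        return False  # too warm: switch the heater off
    return heater_on  # inside the dead band: keep the current state

# Simulate a room: the heater adds heat, the room slowly leaks it to outside.
temp, heater = 15.0, False
for _ in range(200):
    heater = thermostat_step(temp, setpoint=21.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.05 * (temp - 10.0)  # heating minus losses

print(round(temp, 1))  # settles into a narrow band around the setpoint
```

The hysteresis band is the "smart" part: it prevents the actuator from chattering on and off at every sensor reading, a pattern that recurs in almost every closed-loop smart system.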

Electronic smart systems identify a broad class of intelligent and miniaturized devices that are usually energy-autonomous and ubiquitously connected. To support functions like sensing, actuation, and control, electronic smart systems must include sophisticated and heterogeneous components and subsystems, such as digital signal processing devices, analog devices for RF and wireless communication, discrete elements, application-specific sensors and actuators, energy sources, and energy storage devices. These systems take advantage of the progress achieved in the miniaturization of electronic systems, are highly energy-efficient and increasingly often energy-autonomous, and can communicate with their environment.
 
Thanks to their heterogeneous nature, smart embedded and cyber-physical applications are able to deliver a wide range of services, and their application may provide solutions to grand social, economic, and environmental challenges such as environmental and pollution control, energy efficiency at various scales, aging populations and demographic change, the risk of industrial decline, security from micro- to macro-level, safety in transportation, increased needs for the mobility of people and goods, and health and lifestyle improvements, to name just the most relevant.
 
The goal is to develop and validate a new generation of cost-effective ESS technologies integrating hardware technologies across multiple fields. This massive integration of electronics everywhere introduces challenges such as integration, miniaturization, building practice, new sensors, low energy consumption, electromagnetic interference (EMI), architectures for high performance computing, resource-efficient communication, and affordable components. 
 
- Security and Resilience for Collaborative Manufacturing Environments:

The widespread adoption of ICT by manufacturing industry around the world is now paving the way for disruptive approaches to development, production and the entire logistics chain (i.e., Industry 4.0 - the digitization of industrial manufacturing). This is increasingly blurring the boundaries between the real world and the virtual world in what are known as cyber-physical production systems (CPPSs). At the same time, a new operational risk for connected, smart manufacturers and digital supply networks appears, and this is cyber risk. The interconnected nature of Industry 4.0-driven operations and the pace of digital transformation mean that cyberattacks can have far more extensive effects than ever before. 
 
The technological developments at the base of Industry 4.0 at the same time raise a vast number of associated security concerns. Unfortunately, intruders will not stop trying to find new ways of breaking into business networks. Attacks specifically designed to penetrate industrial control systems present a threat to production facilities. Infected computers can be controlled remotely and their data stolen. As the malware exploits unknown security holes, firewalls and network monitoring software are unable to detect it. 
 
Cyber risks in the age of Industry 4.0 extend beyond the supply network and manufacturing, however, to the product itself. As products are increasingly connected – both to each other and, at times, even back to the manufacturer and supply network – cyber risk no longer ends once a product has been sold. Connected objects also have a risk level, because IoT devices often present significant cyber risks. IoT devices that perform some of the most critical and sensitive tasks in industry are often the most vulnerable devices found on a network. Therefore, an integrated approach to protecting devices must be taken. 
 
The nature of cyber risks in Industry 4.0 thus depends largely on the particular industrial portfolio and therefore requires adequate action from the industrial decision-makers concerned. However, given that industrial production is governed by a number of regulations, industrial cyber risks should also be a concern for regulators.
 
- Artificial Intelligence (AI), Robotics, and Application Areas:

Artificial Intelligence (AI) is advancing at breakneck speed. Technical advances are making it possible for non-experts to apply AI in their work, accelerating the pace at which new AI solutions are deployed. The pace of automation that this technology is fueling will reach every corner of the global economy.
 
While robots originated in large-scale mass manufacturing, they are now spreading to more and more application areas. In these new settings, robots are often faced with new technical and non-technical challenges. Through interdisciplinary research across technological and sector-specific fields, research scientists drive innovation and new discoveries across the robotics spectrum - from large-scale automation and autonomous vehicles to personalized robotic learning and engagement applications or systems. Intelligence is moving towards edge devices. Increased computing power and sensor data, along with improved AI algorithms, are driving the trend towards machine learning being run on the end device, such as a smartphone or automobile, rather than in the cloud. For example:
 
  • Robotic process automation, alongside blockchain, AI, cognitive computing and the Internet of Things (IoT), is one of the new and emerging technologies expected to profoundly impact and transform the workforce of the future across the financial services sector. Robotic Process Automation (RPA) is quickly becoming the go-to solution for financial institutions that want to improve digital speed to market and reduce costs.
  • AI and robotics are transforming healthcare. AI is getting increasingly sophisticated at doing what humans do, but more efficiently, more quickly and at a lower cost. The potential for both AI and robotics in healthcare is vast. Just like in our every-day lives, AI and robotics are increasingly a part of our healthcare eco-system.
  • The food industry is being revolutionized by robotics and automation. There are real problems in modern agriculture. Traditional farming methods struggle to keep up with the efficiencies required by the market. Farmers in developed countries are suffering from a lack of workforce. The rise of automated farming is an attempt to solve these problems by using robotics and advanced sensing. Here are five ways robotics is changing the food industry: nursery automation, autonomous precision seeding, crop monitoring and analysis, fertilizing and irrigation, and crop weeding and spraying.
 
(Toronto, Canada - Wei-Jiun Su)

3. Data Infrastructure: HPC, Big Data and Cloud Technologies

 
- HPC, Big Data and Cloud Computing: the way forward to the future of mankind:
 
Progress for both science and mankind is going to depend more and more on “supercomputer brains” that can process large amounts of data in real time, giving the data meaning and - subsequently - turning it into actionable knowledge. 
 
The Internet of Things and the convergence of HPC, big data and cloud computing technologies are enabling the emergence of a wide range of innovations. Building industrial large-scale application test-beds that integrate such technologies and that make best use of currently available HPC and data infrastructures will accelerate the pace of digitization and the innovation potential in key industry sectors (for example, healthcare, manufacturing, energy, finance & insurance, agri-food, space and security).
 
- High Performance and Super Computing:
 
In the Age of Internet Computing, billions of people use the Internet every day. As a result, supercomputer sites and large data centers must provide high-performance computing services to huge numbers of Internet users concurrently. We have to upgrade data centers using fast servers, storage systems, and high-bandwidth networks. The purpose is to advance network-based computing and web services with the emerging new technologies.
 
The general computing trend is to leverage shared web resources and massive amounts of data over the Internet. The evolution is towards parallel, distributed, and cloud computing with clusters, MPP (massively parallel processing) systems, P2P (peer-to-peer) networks, grids, clouds, web services, and the Internet of Things.
 
"Supercomputer" is a general term for computing systems capable of sustaining high-performance computing applications that require a large number of processors, shared or distributed memory, and multiple disks. Supercomputers are primarily are designed to be used in enterprises and organizations that require massive computing power. A supercomputer incorporates architectural and operational principles from parallel and grid processing, where a process is simultaneously executed on thousands of processors or is distributed among them.
 
Performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Today, the fastest supercomputers can perform nearly a hundred quadrillion FLOPS - that is, on the order of 100 petaFLOPS (PFLOPS) - and all of the world's 500 fastest supercomputers run Linux-based operating systems. 
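To give a rough sense of scale, the snippet below compares how long a fixed workload takes at different sustained rates (the machine speeds are order-of-magnitude assumptions, not benchmark figures):

```python
def runtime_seconds(total_flop, flops):
    """Time for a machine sustaining `flops` to execute `total_flop` operations."""
    return total_flop / flops

WORKLOAD = 1e18          # a quintillion floating-point operations
laptop = 1e11            # ~100 GFLOPS: a rough figure for a modern laptop
top_supercomputer = 1e17 # ~100 PFLOPS: the scale described above

print(runtime_seconds(WORKLOAD, laptop) / 3600)  # thousands of hours on a laptop
print(runtime_seconds(WORKLOAD, top_supercomputer))  # seconds on a supercomputer
```

The six-orders-of-magnitude gap is what makes real-time analysis of massive data streams feasible at all.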
 
- Turning Big Data into Smart Data:

Big data refers to extremely large datasets that are difficult to analyze with traditional tools. It is often boiled down to a few varieties of data generated by machines, people, and organizations. Big data is being generated by everything around us at all times. Every digital process and social media exchange produces it. Systems, sensors and mobile devices transmit it. Big data can be either structured, semi-structured, or unstructured. IDC estimates that 90 percent of big data is unstructured data. 

Big data is arriving from multiple sources at an alarming velocity, volume and variety. To extract meaningful value from big data, you need optimal processing power, analytics capabilities and skills. In most business use cases, any single source of data on its own is not useful. Real value often comes from combining these streams of big data sources with each other and analyzing them to generate new insights. 
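The point that value comes from combining streams rather than from any single source can be sketched with two toy data feeds (all names and numbers below are invented for illustration):

```python
# Two "streams" keyed by day: point-of-sale totals and web-traffic counts.
sales = {"2024-01-01": 1200.0, "2024-01-02": 980.0, "2024-01-03": 1430.0}
visits = {"2024-01-01": 300, "2024-01-02": 240, "2024-01-03": 310}

def join_and_derive(sales, visits):
    """Join the two sources on their shared key and derive a new metric."""
    combined = {}
    for day in sales.keys() & visits.keys():
        combined[day] = {
            "sales": sales[day],
            "visits": visits[day],
            # Neither source alone contains this insight - only their combination.
            "revenue_per_visit": sales[day] / visits[day],
        }
    return combined

result = join_and_derive(sales, visits)
print(result["2024-01-01"]["revenue_per_visit"])  # 4.0
```

At scale the same join-and-derive step runs over distributed storage with engines built for volume and velocity, but the logic is identical.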

Analyzing large data sets - so-called big data - will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus. Big data must pass through a series of steps before it generates value: namely, data access, storage, cleaning, and analysis.
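Those steps - access, cleaning, analysis - can be sketched as a minimal pipeline (toy data and formats, for illustration only; real pipelines add a durable storage stage between each step):

```python
def access(raw_lines):
    """Access: parse raw records (here, CSV-like 'sensor,reading' lines)."""
    return [line.split(",") for line in raw_lines]

def clean(records):
    """Cleaning: drop malformed rows and coerce readings to numbers."""
    out = []
    for rec in records:
        try:
            sensor, value = rec[0].strip(), float(rec[1])
            out.append((sensor, value))
        except (IndexError, ValueError):
            continue  # discard rows that cannot be repaired
    return out

def analyze(records):
    """Analysis: a per-sensor average, the simplest useful aggregate."""
    totals = {}
    for sensor, value in records:
        s, n = totals.get(sensor, (0.0, 0))
        totals[sensor] = (s + value, n + 1)
    return {sensor: s / n for sensor, (s, n) in totals.items()}

raw = ["a,1.0", "a,3.0", "b,2.0", "garbage", "b,not_a_number"]
averages = analyze(clean(access(raw)))
print(averages)  # {'a': 2.0, 'b': 2.0}
```

Note that two of the five raw rows are silently discarded in the cleaning stage; deciding what to discard versus repair is usually where most of the engineering effort in a big data pipeline goes.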
 
- Cloud Technologies:
 
Cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more - over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home.
 
Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These are sometimes called the cloud computing stack, because they build on top of one another. There are three different ways to deploy cloud computing resources: public cloud, private cloud, and hybrid cloud. Knowing what they are and how they’re different makes it easier to accomplish your business goals. 
 
Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet. A Cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application. 
 

4. Artificial Intelligence, Machine Learning, and Neural Networks

 

- Artificial Intelligence (AI): 

Artificial Intelligence (AI) is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”. Machine Learning (ML) is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.

Over the past few years AI has exploded, and especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) - images, text, transactions, mapping data, you name it.

The most important thing to understand about AI is that it is not a static formula to solve. It’s a constantly evolving system designed to identify, sort, and present the data that is most likely to meet the needs of users at that specific time, based on a multitude of variables that go far beyond just a simple keyword phrase. 

AI is trained by using known data, such as: content, links, user behavior, trust, citations, patterns, and then analyzing that data using user experience, big data, and machine learning to develop new ranking factors capable of producing the results most likely to meet user needs.

The goal of Artificial Intelligence (AI) is to understand intelligence by constructing computational models of intelligent behavior. This entails developing and testing falsifiable algorithmic theories of (aspects of) intelligent behavior, including sensing, representation, reasoning, learning, decision-making, communication, coordination, action, and interaction. AI is also concerned with the engineering of systems that exhibit intelligence.  

- Machine and Deep Learning:

Machine-learning algorithms use statistics to find patterns in massive amounts of data. And data encompasses a lot of things - numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.

Machine learning is the process that powers many of the services we use today: recommendation systems like Netflix's, search engines like Google, social-media feeds like Facebook's, and voice assistants like Siri. In all of these instances, each platform is collecting as much data about you as possible, such as what genres you like watching, what links you are clicking, and which statuses you are reacting to, and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words match best with the funny sounds coming out of your mouth. Frankly, this process is quite basic: find the pattern, apply the pattern. But it pretty much runs the world.
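That find-the-pattern, apply-the-pattern loop can be sketched with a toy nearest-neighbour recommender; the users, genres, and viewing counts below are invented for illustration:

```python
# Each vector counts how often a user watched [comedy, drama, sci-fi].
watch_history = {
    "alice": [8, 1, 0],
    "bob":   [0, 2, 9],
    "carol": [7, 2, 1],
}

def similarity(a, b):
    # Dot product as a crude measure of shared taste.
    return sum(x * y for x, y in zip(a, b))

def most_similar(user, histories):
    """Find the pattern: which other user watches most like this one?"""
    others = {u: v for u, v in histories.items() if u != user}
    return max(others, key=lambda u: similarity(histories[user], others[u]))

# Apply the pattern: recommend alice what her nearest neighbour enjoys.
print(most_similar("alice", watch_history))   # carol
```

Production recommenders use vastly larger matrices and learned embeddings, but the underlying match-then-suggest logic is the same.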

Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find, and amplify, even the smallest patterns. This technique is called a deep neural network: deep because it has many, many layers of simple computational nodes that work together to munch through data and deliver a final result in the form of a prediction.

- Neural Networks: 

Neural networks were vaguely inspired by the inner workings of the human brain. The nodes are sort of like neurons, and the network is sort of like the brain itself. Machine (and deep) learning comes in three flavors: supervised, unsupervised, and reinforcement.
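A minimal sketch of such a layered network, using NumPy with randomly initialized (untrained) weights, shows how data flows layer by layer of nodes to a final prediction:

```python
import numpy as np

# Each layer is just matrix-multiply, add bias, apply a non-linearity.
rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # ReLU activation of the nodes

# Three stacked layers with hypothetical, untrained weights.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))             # one input example, 4 features
hidden = layer(layer(x, w1, b1), w2, b2)
scores = hidden @ w3 + b3               # final prediction scores
print(scores.shape)                     # (1, 2)
```

Training, whether supervised, unsupervised, or by reinforcement, is the process of adjusting those weight matrices so the output scores become useful.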

 
Kerry_Park_Seattle_WA_012115
(Kerry Park, Seattle, U.S.A. - Jeffrey M. Wang)

5. 5G and Beyond Mobile Wireless Technology


"AI, machine learning, deep learning, autonomous systems and neural networks are not just buzzwords and phrases. Increased computing power, more efficient hardware and robust software, as well as an explosion in sensor data from the Internet of Things - are fueling machine learning, and moving actionable data and intelligence towards edge devices. As AI makes devices, including smartphones and automobiles, more intelligent, mobile is becoming the key platform for enhancing all aspects of our lives, having an impact now and in the future." -- (MIT)

Mobile is the largest technology platform in human history. The next generation of wireless networks, known as 5G, will operate at vastly higher speeds and handle tens of times more devices than existing 4G networks. The actual 5G radio system, known as 5G-NR, won't be compatible with 4G, but all 5G devices will initially need 4G because they'll lean on it to make initial connections before trading up to 5G where it's available. 4G will continue to improve with time, as well.

5G standards are not yet finalised and the most advanced services are still in the pre-commercial phase. 5G needs spectrum within three key frequency ranges to deliver widespread coverage and support all use cases: sub-1 GHz, 1-6 GHz, and above 6 GHz. Spectrum above 6 GHz is needed to meet the ultra-high broadband speeds envisioned for 5G. Players in the U.S. national wireless industry (AT&T, Verizon, and others) are developing their 5G networks and working to acquire spectrum. AT&T is gearing up to launch the first standards-based 5G services in multiple U.S. markets by the end of 2018.

5G will achieve speeds of 20 gigabits per second, fast enough to download an entire Hollywood movie in a few seconds. It will also reduce latency, the measure of how long it takes a packet of data to travel between two points, by a factor of 15. 5G networks will combine numerous wireless technologies, such as 4G LTE, Wi-Fi, and millimeter-wave technology. 5G will also leverage cloud infrastructure, intelligent edge services and a virtualized network core. 
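The movie claim is easy to check with back-of-envelope arithmetic, assuming a roughly 10 GB HD movie file and the 20 Gbps peak rate:

```python
# Back-of-envelope check of the download claim: time = size / rate.
movie_bytes = 10 * 10**9          # assume a ~10 GB HD movie
rate_bits_per_s = 20 * 10**9      # 20 Gbps peak 5G throughput

seconds = (movie_bytes * 8) / rate_bits_per_s
print(f"{seconds:.0f} s")         # 4 s
```

Real-world throughput will be lower than the theoretical peak, so "a few seconds" is the optimistic bound rather than a typical figure.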

Instead of point-to-point communications provided by legacy mobile networks, 5G will move packets of data following the most efficient path to their destination. This shift enables real time aggregation and analysis of data, moving wireless technology from communication to computing. Four factors distinguish 5G from its predecessors: connected devices, fast and intelligent networks, back-end services and extremely low latency. These qualities enable a fully connected and interactive world with a variety of new applications.

Leveraging state-of-the-art communication network architectures, 5G is touted to be the primary catalyst for next-generation Internet of Things (IoT) services. 5G will provide the backbone for IoT that greatly improves data transfer speeds and processing power over its predecessors. This combination of speed and computing power will enable new applications. These include connected cars coupled with augmented reality and virtual reality platform, smart cities and connected devices that revolutionize key industry verticals.

By 2020, the 5G network will support more than 20 billion connected devices and 212 billion connected sensors, and enable access to 44 zettabytes of data gathered from a wide range of devices, from smartphones to remote monitoring devices. Healthcare organizations are eager to embrace IoT devices because they save money by keeping patients out of the hospital. If IoT devices can diagnose problems in advance, they can avert huge costs.  

 

6. The Next Generation Internet (NGI) and Quantum Computing


- IPV6 - The Next Generation Internet:

Internet Protocol version 6 (IPv6) is the latest revision of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. Every device on the Internet must be assigned an IP address in order to communicate with other devices. IPv6 contains addressing and control information to route packets for the Next Generation Internet (NGI).
 
IPv6 addresses the main problem of IPv4: the exhaustion of addresses for connecting computers, or hosts, in a packet-switched network. IPv6 has a very large address space, using 128 bits as compared to 32 bits in IPv4. It is therefore possible to support 2^128 unique IP addresses, a substantial increase in the number of hosts that can be addressed under the IPv6 addressing scheme. In addition, this addressing scheme eliminates the need for NAT (network address translation), which causes several networking problems (such as hiding multiple hosts behind a pool of IP addresses) that break the end-to-end nature of the Internet.
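The jump in address space is easy to see with Python's standard ipaddress module:

```python
import ipaddress

# The leap from 32-bit to 128-bit addresses.
print(2**32)     # IPv4: 4,294,967,296 addresses
print(2**128)    # IPv6: about 3.4 x 10^38 addresses

# Parsing a documentation-range IPv6 address with the stdlib.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.version, addr.exploded)
# 6 2001:0db8:0000:0000:0000:0000:0000:0001
```

The `::` shorthand compresses runs of zero groups, which is why the exploded form is so much longer than the usual notation.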
 
IPv6 addresses include a scope field that identifies the type of application suitable for the address. IPv6 does not support broadcast addresses, but instead uses multicast addresses for broadcast. In addition, IPv6 defines a new type of address called anycast. The IPv6 protocol can handle packets more efficiently, improve performance and increase security. It enables internet service providers to reduce the size of their routing tables by making them more hierarchical. 
 
IPv6 builds upon the functionality and structure of IPv4 in the following ways: Provides a simplified and enhanced packet header to allow for more efficient routing; Improves support for mobile phones and other mobile computing devices; Enforces increased, mandatory data security through IPsec (which was originally designed for it); Provides more extensive quality-of-service (QoS) support.
 
IPv6 brings the quality of service (QoS) required by several new applications such as IP telephony, video/audio streaming, interactive games and e-commerce. Whereas IPv4 is a best-effort service, IPv6 can ensure QoS, a set of service requirements for delivering performance guarantees while transporting traffic over the network. For network traffic, quality refers to data loss, latency, jitter and bandwidth. To implement QoS marking, IPv6 provides an 8-bit traffic-class field in the IPv6 header, along with a 20-bit flow label. Mobile IPv6, in turn, allows a computer or host to remain reachable regardless of its location in an IPv6 network and, in effect, ensures transport-layer connection survivability. With the help of Mobile IPv6, even though a mobile node changes locations and addresses, the existing connections through which it is communicating are maintained. To accomplish this, connections to mobile nodes are made via a specific address that is always assigned to the mobile node, and through which it is always reachable. 
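The traffic-class and flow-label fields sit in the first 32 bits of the IPv6 header, after the 4-bit version field. A minimal sketch of unpacking them; the example header word below is invented for illustration:

```python
# First 32 bits of an IPv6 header: version (4 bits),
# traffic class (8 bits), flow label (20 bits).
def parse_first_word(word):
    version = (word >> 28) & 0xF
    traffic_class = (word >> 20) & 0xFF
    flow_label = word & 0xFFFFF
    return version, traffic_class, flow_label

# Hypothetical header word: version 6, traffic class 0xB8, flow 0x12345.
word = (6 << 28) | (0xB8 << 20) | 0x12345
print(parse_first_word(word))   # (6, 184, 74565)
```

Routers read the traffic class for QoS marking and can use the flow label to keep all packets of one flow on the same path without inspecting higher layers.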

Other important features of IPv6 include: stateless auto-configuration of hosts, which allows an IPv6 host to configure itself automatically when connected to a routed IPv6 network; and network-layer security, as IPv6 implements network-layer encryption and authentication via IPsec.
 
Considering all these advantages of IPv6, the industry seems to be taking a long time to migrate from IPv4. Part of the reason is that network address translation (NAT) has helped delay the transition: NAT makes it possible to direct traffic to thousands upon thousands of individual IP addresses on private networks through NAT gateways that each use just one public IP address.
 
Most of the world “ran out” of new IPv4 addresses between 2011 and 2018 – but we won’t completely be out of them as IPv4 addresses get sold and re-used, and any leftover addresses will be used for IPv6 transitions. There’s no official switch-off date, so people shouldn’t be worried that their internet access will suddenly go away one day. As more networks transition, more content sites support IPv6 and more end users upgrade their equipment for IPv6 capabilities, the world will slowly move away from IPv4.
 
- Quantum Communication Network:

Quantum computers are on the cusp of commercialization. What is quantum computing, and what will it be capable of?
 

[World Economic Forum]: As China moves closer to building a working quantum communications network, the possibility of a quantum Internet becomes more and more real. In the simplest of terms, a quantum Internet would be one that uses quantum signals instead of radio waves to send information. The Internet as we know it uses radio frequencies to connect various computers through a global web in which electronic signals are sent back and forth. In a quantum internet, signals would be sent through a quantum network using entangled quantum particles.

Researchers have recently made significant progress in building this quantum communication network. China launched the world’s first quantum communication satellite in 2016, and they’ve since been busy testing and extending the limitations of sending entangled photons from space to ground stations on Earth and then back again. They’ve also managed to store information using quantum memory. By the end of August, 2017, the nation plans to have a working quantum communication network to boost the Beijing-Shanghai internet. Leading these efforts is Jian-Wei Pan of the University of Science and Technology of China, and he expects that a global quantum network could exist by 2030. That means a quantum internet is just 13 years away, if all goes well. 

- What is a Quantum Computer?:

[BBC]: "A quantum computer is a machine that is able to crack very tough computation problems with incredible speed - beyond that of today's "classical" computers. In conventional computers, the unit of information is called a "bit" and can have a value of either 1 or 0. But its equivalent in a quantum system - the qubit (quantum bit) - can be both 1 and 0 at the same time. This phenomenon opens the door for multiple calculations to be performed simultaneously. However, qubits need to be synchronised using a quantum effect known as entanglement, which Albert Einstein termed "spooky action at a distance". There are four types of quantum computers currently being developed, which use: light particles; trapped ions; superconducting qubits; and nitrogen-vacancy centres in diamonds.

Quantum computers will enable a multitude of useful applications, such as being able to model many variations of a chemical reaction to discover new medications; developing new imaging technologies for healthcare to better detect problems in the body; or to speed up how we design batteries, new materials and flexible electronics." 
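Superposition itself can be illustrated numerically: a qubit is a two-amplitude state vector, and a Hadamard gate turns the definite state |0> into an equal mix of 0 and 1. A minimal NumPy sketch:

```python
import numpy as np

# A qubit as a 2-amplitude state vector; the Hadamard gate puts the
# |0> state into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

zero = np.array([1.0, 0.0])      # the classical bit "0"
superposed = H @ zero            # amplitudes (1/sqrt(2), 1/sqrt(2))

probabilities = np.abs(superposed) ** 2
print(probabilities)             # measuring yields 0 or 1, each ~50%
```

Simulating n qubits classically needs a vector of 2^n amplitudes, which is exactly why quantum hardware promises speed-ups that classical machines cannot match.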

- How do you build the next-generation Internet?:

[BBC]: It's not easy to develop technology for a device that hasn't technically been invented yet, but quantum communications is an attractive field of research because the technology will enable us to send messages that are much more secure.

There are several problems that will need to be solved in order to make a quantum Internet possible: getting quantum computers to talk to each other; making communications secure from hacking; transmitting messages over long distances without losing parts of the message; and routing messages across a quantum network.  

 

Hong Kong_4
(Hong Kong)

7. Building the Digital Platforms of the Future


The new digital technologies have led to widespread use of cloud computing, recognition of the potential of big data analytics and artificial intelligence, and significant progress in aspects of the Internet of Things, such as home automation, smart cities and grids, and digital manufacturing. In addition to closing gaps in the basic necessities of access and usage, the conditions must now be established for using the new platforms and for finding ways to participate actively in the creation of content and even of new applications and platforms.

A website is an essential element for running a successful business. A business without a website can potentially lose out on great opportunities since potential customers can’t reach you, find you and learn about you online.

- Building a Next-Gen Digital Platform to Lead in the New Digital Economy:

The enterprise world is changing faster than ever. To compete, it is now necessary to do business at an almost unprecedented size and scale. In order to achieve this scale, winning companies are establishing digital platforms that extend their organizational boundaries. With the Internet as the platform for innovation and the emergence of the information-fueled economy, technology is both a strategic requirement and a strategic advantage.
 
While the term “digital platforms” includes anything from search engines (such as Google), to social platforms (such as Facebook), all the way to IaaS providers and PaaS providers (such as AWS and Azure), digitalized business technology is becoming increasingly refined.
 
Digital platforms are virtualized, containerized, and treated like malleable, reusable resources, with workloads remaining independent from the operating environment. Systems are loosely coupled and embedded with policies, controls, and automation. Likewise, on-premises, private cloud, or public cloud capabilities can be employed dynamically to deliver any given workload at an effective price and performance point.
 
- Digital Platforms in Banking and Financial Services:

Advances in digital technology have expanded awareness of the benefits of conducting financial transactions online or with mobile devices. At the same time, digital advances have provided access to financial services for billions of previously unserved and underserved consumers worldwide, especially in less developed economies. 

Based on current trends, digital platforms will become the preferred and dominant business model for banks and financial institutions in the future. Digital platforms offer consumers and small businesses the ability to connect to financial and other service providers through an online or mobile channel as an integrated part of their day-to-day activities.

In emerging markets, where billions of people are without access to traditional financial services, FinTech could lead to a revolution in financial inclusion and membership in the new global digital economy. With it, individuals and businesses gain access to useful and affordable financial products and services that meet their needs – transactions, payments, savings, credit and insurance – delivered in a responsible and sustainable way. Financial inclusion is a key enabler for reducing poverty and boosting prosperity.

With lower distribution costs and simplified engagement, the movement from paper to digital is picking up speed and increasing consumer expectations. This provides traditional financial institutions the opportunity to transform legacy delivery options, while also challenging the business case for existing physical infrastructures. 

The digitization of financial services will also improve identity management through enhanced biometrics. This will expand access to banking services in underserved markets and improve traditional payments and global money movement.
 
- Digital Manufacturing Platforms for Connected Smart Factories:

Digital manufacturing platforms will be fundamental for the development of industry 4.0 and connected smart factories. They play an increasing role in dealing with competitive pressures and incorporating new technologies, applications and services. Advances are needed in digital manufacturing platforms that integrate different technologies, make data from the shop floor and the supply network easily accessible, and allow for complementary applications. The challenge is to fully exploit new concepts and technologies that allow manufacturing companies, especially mid-caps, small and medium-sized enterprises (SMEs), to fulfill the demands from changing supply and value networks.

- Agricultural Digital Integration Platforms:

Digital agriculture is the use of new and advanced technologies, integrated into one system, to enable farmers and other stakeholders within the agriculture value chain to improve food production. The rise of digital agriculture and its related technologies has opened a wealth of new data opportunities. Remote sensors, satellites, and drones can gather information 24 hours per day over an entire field. These can monitor plant health, soil condition, temperature, humidity, etc. The amount of data these sensors can generate is overwhelming, and the significance of the numbers is hidden in the avalanche of that data. Companies are leveraging computer vision and deep-learning algorithms to process data captured by drones and/or software-based technology to monitor crop and soil health. Machine learning models are being developed to track and predict various environmental impacts on crop yield such as weather changes.
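A minimal sketch of such a yield model, assuming a single hypothetical feature (seasonal rainfall) and fitting a line by ordinary least squares; all numbers are invented for illustration:

```python
# Toy yield model: ordinary least squares on one feature
# (seasonal rainfall in mm -> yield in tonnes per hectare).
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

rainfall = [300, 400, 500, 600]     # hypothetical seasonal totals (mm)
yields = [2.0, 2.6, 3.1, 3.7]       # hypothetical yields (t/ha)

slope, intercept = fit_line(rainfall, yields)
print(round(slope * 450 + intercept, 2))   # predicted yield at 450 mm
```

Production models add many more features (temperature, soil moisture, imagery-derived indices) and non-linear learners, but the predict-from-measurements pattern is the same.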

Needed first and foremost in this digital ecosystem is an integrating digital platform. Much as Apple opened up the iPhone to independent application providers, a standardized digital platform would provide a hub where all agtech providers can essentially sell their wares, while capturing their data and integrating it into the platform. This integrated platform gives farmers the ability to track their operations from several different angles, from soil-moisture sensing to satellite imagery to weather data, to better make predictions and decisions on how their operations are faring. The integrating platform enables and protects stakeholder access and information; automates the development and analysis of massive bodies of data; and develops, reveals, and manages the potential costs, and revenues, of these decisions. These decisions can then be quickly implemented with greater accuracy through robotics and advanced machinery, and farmers can get real-time feedback on the impact of their actions.

Digital agriculture has the potential to transform the way we produce the world’s food, but the approach is still very new, costs are high and the details of the long-term benefits are rarely available. Securing its widespread adoption will therefore require collaboration and consensus across the value chain on how to overcome these challenges.
 
- Digital Service Platforms for Rural Economies:
 
The term ‘Digital Entrepreneurship’ most commonly refers to the process of creating a new - or novel - Internet enabled/delivered business, product or service. This definition includes both startups bringing a new digital product or service to market and the digital transformation of an existing business activity inside a firm or the public sector.

In the developed world, the emergence of utility-based cloud computing is shifting focus from technical barriers to the business environment challenges facing digital entrepreneurs. This shift reinforces the growing importance of implementing effective policies that foster the best climate for digital service incubation, growth and successful development. However, in many rural areas and developing countries, even basic infrastructure remains a challenge, from the hardware, the network, the content, the ICT eco-system, to the skills on both consumer and business sides.
 
AT&T has recently been rolling out broadband connectivity across rural and underserved locations in the United States. Its Fixed Wireless Internet service offers a connection with download speeds of at least 10 Mbps and upload speeds of at least 1 Mbps. Connectivity is facilitated through a wireless tower and routed via a fixed antenna placed on the customer’s home. This cost-effective connection is arguably one of the best methods of delivering high-quality, faster broadband to customers in underserved rural areas.

- Digital Platform for Cultural Heritage:
 
Cultural heritage breathes new life with digital technologies and the Internet. Information and Communications Technology (ICT) changes the way cultural digital resources are created, disseminated, preserved and (re)used, and it empowers different types of users to engage with them. People now have unprecedented opportunities to access cultural material, while institutions can reach out to broader audiences, engage new users and develop creative and accessible content for leisure and education. New technologies bring cultural heritage sites back to life, for example through web discovery interfaces representing a wealth of information from collections (archives, scientific collections, museums, art galleries, visual arts, etc.), enabling their re-use and re-purposing according to users' needs and inputs.

Virtual Reality and Van Gogh collide: technology is turning museums into a booming industry. A virtual museum (VM) is a digital entity that draws on the characteristics of a museum in order to complement, enhance, or augment the museum experience through personalization, interactivity, user experience and richness of content. A VM is not a real museum transposed to the web, nor an archive or database of virtual digital assets, but a provider of information on top of being an exhibition room. A VM provides opportunities for people to access digital content before, during and after a visit in a range of digital ‘encounters’. VMs are technologically demanding, especially in terms of virtual and augmented reality and storytelling authoring tools, which must cover various types of digital creations, including virtual reality and 3D experiences, located online, in museums or on heritage sites. The challenge will be to give further emphasis to improving access, establishing meaningful narratives for collections and displays, and story-led interpretation through the development of VMs. It will also address the fundamental issues required to make this happen, e.g. image rights, licensing and the ability of museums to support new ICT technology. Virtual museums offer visitors the possibility to see art works residing in different places in context and to experience objects or sites inaccessible to the public.

Cultural and creative industries are the economic activities of artists, arts enterprises, and cultural entrepreneurs in the production, distribution and consumption of film, literature, theatre, dance, visual arts, broadcasting, and fashion. New digital and information and communication technologies have revolutionized the industry's production process, distribution channels, and consumption modes.
 

- Digital Platforms for Interoperable and Smart Homes, Smart Buildings, Smart Environments, and Smart Grids:

Modern society is dependent on a reliable, abundant supply of energy, and as our populations and cities get bigger, that demand is only set to grow. We need smart grid technology because demand for electricity will keep increasing even as we must cut consumption to fight global warming. Undoubtedly, new sources of power generation will be needed to meet skyrocketing world energy demand. We will need a scalable, innovative, and clean energy portfolio that meets the world’s need for reliable energy sources while considering the economic, environmental, health and climate effects of energy generation. In the meantime, the smart grid will be implemented incrementally over the next two decades as technology, pricing, policy, and regulation change.
 
As energy production becomes decentralised and ICT is increasingly present in homes, the integration of renewable energy sources (RES) and the promotion of energy efficiency should benefit from smarter homes, buildings and appliances, as well as (the batteries in) electric vehicles. Smart homes and buildings are one crucial element, because system integration and optimisation of distributed generation, storage and flexible consumption will require interoperable smart technologies installed at the building level. The Internet of Things (IoT) enables a seamless integration of home appliances with related home-comfort and building-automation services, making it possible to match user needs with the management of distributed energy across the grid and to gain access to the benefits of Demand Response. Novel services should lead to a more comfortable, convenient and healthier living environment at lower energy costs for consumers, whilst enabling active participation of consumers in the energy system and energy markets.

- Big Data Solutions for Energy:

Tomorrow's energy grids consist of heterogeneous interconnected systems and an increasing number of small-scale, dispersed energy generation and consumption devices, generating huge amounts of data. The electricity sector, in particular, needs big data tools and architectures for optimized energy system management under these demanding conditions.

Digital data and analytics can reduce O&M costs by enabling predictive maintenance, which can lower the price of electricity for end users. Digital data and analytics can help achieve greater efficiencies through improved planning, improved efficiency of combustion in power plants and lower loss rates in networks, as well as better project design throughout the power system. In networks, efficiency gains can be achieved by lowering the rate of losses in the delivery of power to consumers, for example through remote monitoring that allows equipment to be operated closer to its optimal conditions, and flows and bottlenecks to be better managed by grid operators. Digital data and analytics can also reduce the frequency of unplanned outages through better monitoring and predictive maintenance, as well as limiting the duration of downtime by rapidly identifying the point of failure. This reduces costs and increases the resilience and reliability of supply.
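Predictive maintenance of the kind described often starts with simple anomaly detection on sensor streams. A minimal sketch, flagging readings that drift more than three standard deviations from a rolling baseline; the temperature data is invented for illustration:

```python
import statistics

# Flag sensor readings that deviate sharply from the recent baseline.
def flag_anomalies(readings, window=5, threshold=3.0):
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        std = statistics.stdev(baseline)
        if std and abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Hypothetical transformer temperatures with one sudden spike.
temps = [70.1, 70.4, 69.9, 70.2, 70.0, 70.3, 70.1, 95.0, 70.2]
print(flag_anomalies(temps))    # [7]
```

Catching such a spike early is what lets an operator schedule maintenance before the component fails and causes an unplanned outage.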

Artificial Intelligence (AI) is making its way into all types of industries, including the energy sector, with significant growth in the use of AI to leverage big data and draw inferences from very large data sets. AI is the application of machine learning for the purposes of automation and computational support of decision-making in a complex system. AI has great potential to coordinate and optimize the use of distributed energy resources, electric vehicles, and IoT. Use of AI aligns well with the current pace of change that utilities, regulators and customers expect, with improvements to common utility operations including: reliability (e.g., self-healing grids, operations improvement and efficient use of renewable resources and energy storage); safety (e.g., outage prediction and outage response); cybersecurity of systems (e.g., threat detection and response); optimization (e.g., asset, maintenance, workflow and portfolio management); and enhancements for the customer experience (e.g., faster and more intuitive interactive voice response, personalization, and product and service matching).
 
The U.S. Capitol_IMG_0606
(The U.S. Capitol, Washington D.C., Jeff M. Wang)
 
- The Smart Hospital of the Future:
 
Smart hospitals are those that optimize, redesign or build new clinical processes, management systems and potentially even infrastructure, enabled by underlying digitized networking infrastructure of interconnected assets, to provide a valuable service or insight which was not possible or available earlier, to achieve better patient care, experience and operational efficiency. Smart hospitals rely on interconnected advanced technology and automation to improve patient care, clinician workflow, and overall efficiency. Smart hospitals utilize health ICT infrastructure technology such as mobile devices, data analytics solutions, and cloud computing. The process of transitioning ICT infrastructure to support a smart hospital can be challenging, but hospitals need to remember that the transformation must take place in stages. Not every hospital needs to become smart in a single step. Instead, the approach they need to take is to implement smart solutions, one by one, and then allow newer solutions to integrate with existing ones in the journey toward becoming smart. 
 
The smart hospital framework involves three essential layers: data, insight, and access. Data is already being collected today, although not necessarily from all systems in a hospital, but it is not yet integrated to derive 'smart' insight, which can be done by feeding it into analytics or machine learning software. That insight must then be accessible to the user - a doctor, a nurse, facilities personnel, or any other stakeholder - through an interface such as a desktop, smartphone, or similar handheld device, empowering them to make critical decisions faster and improving their efficiency.
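The three layers can be sketched as a small pipeline. The example below is purely illustrative (the record type, the heart-rate limit, and all function names are assumptions): a data layer groups raw readings, an insight layer flags patients above a threshold, and an access layer formats the result for a clinician's screen.

```python
from dataclasses import dataclass
from statistics import fmean

@dataclass
class VitalsReading:
    patient_id: str
    heart_rate: int

def collect(readings):
    """Data layer: group raw readings per patient (in reality fed by devices/EHR)."""
    by_patient = {}
    for r in readings:
        by_patient.setdefault(r.patient_id, []).append(r.heart_rate)
    return by_patient

def derive_insight(by_patient, limit=100):
    """Insight layer: flag patients whose average heart rate exceeds a limit."""
    return {pid: fmean(hrs) for pid, hrs in by_patient.items()
            if fmean(hrs) > limit}

def notify(flagged):
    """Access layer: format alerts for a clinician's dashboard or handheld."""
    return [f"Review patient {pid}: avg HR {avg:.0f} bpm"
            for pid, avg in sorted(flagged.items())]

readings = [VitalsReading("p1", 72), VitalsReading("p1", 75),
            VitalsReading("p2", 118), VitalsReading("p2", 124)]
print(notify(derive_insight(collect(readings))))
```

Real systems would substitute machine learning models for the fixed limit, but the data-to-insight-to-access separation is the point of the framework.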
 
There are three areas that any smart hospital addresses: operations, clinical tasks, and patient centricity. Operational efficiency can be achieved by employing building automation systems and smart asset maintenance and management solutions, along with improving the internal logistics of mobile assets, pharmaceuticals, medical devices, supplies, and consumables inventory, as well as control over the flow of people (staff, patients, and visitors). These solutions not only reduce operational costs such as energy use but also reduce the need for capital expenditure - on mobile assets, for example - by improving utilization rates of existing equipment. Addressing patient flow bottlenecks improves efficiency, allowing more patients to be 'processed' through the system and creating more revenue opportunities at lower cost.
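The capital-expenditure argument above rests on a simple calculation: if tracked utilization of existing assets is low, redeployment can defer new purchases. A minimal sketch (asset names and hours are invented for illustration):

```python
def utilization_rate(busy_hours, available_hours):
    """Fraction of available time an asset (e.g., an infusion pump) is in use."""
    if available_hours <= 0:
        raise ValueError("available_hours must be positive")
    return busy_hours / available_hours

# Illustrative fleet log: hours in use per pump over a 24-hour window.
fleet = {"pump-01": 6, "pump-02": 18, "pump-03": 3}
rates = {asset: utilization_rate(hours, 24) for asset, hours in fleet.items()}
print(rates)  # pump-03 sits idle most of the day: redeploy before buying more
```

Asset-tracking systems feed exactly this kind of figure to managers deciding whether new equipment is actually needed.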
 

8. Cybersecurity

 
[The U.S. Department of Homeland Security]: Our daily life, economic vitality, and national security depend on a stable, safe, and resilient cyberspace. Cyberspace and its underlying infrastructure are vulnerable to a wide range of risks stemming from both physical and cyber threats and hazards. Sophisticated cyber actors and nation-states exploit vulnerabilities to steal information and money, and are developing capabilities to disrupt, destroy, or threaten the delivery of essential services.

[Cisco]: Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These attacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes. Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are becoming more innovative.
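One concrete building block behind "protecting sensitive information from being changed" is integrity checking with a cryptographic hash. The sketch below is a minimal illustration, not a complete defense: any modification to the data, however small, produces a different SHA-256 fingerprint, revealing tampering.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest serving as a fingerprint of the data's exact contents."""
    return hashlib.sha256(data).hexdigest()

# Record a baseline fingerprint while the file is known-good...
baseline = sha256_digest(b"config: allow_admin=false\n")

# ...later, any modification changes the digest, exposing the tampering.
tampered = sha256_digest(b"config: allow_admin=true\n")
print(baseline != tampered)  # True
```

Production systems combine such checks with signatures, access controls, and monitoring; no single measure suffices against the attackers described above.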

 
[More to come ...]


 <drafted by hhw: 11/21/18>

 

 
