Azure – Azure SQL—Public preview updates for mid-July 2023
Public preview enhancements and updates released for Azure SQL in mid-July 2023
Read More for the details.
Editor’s note: The ETH Library aims to advance knowledge, support research and teaching, and, as a trustworthy institution, make the world of yesterday, today, and tomorrow more comprehensible to its users. As the central university library of ETH Zurich, the ETH Library facilitates access to its data and links it with information from other areas of society and knowledge. In this article, Germano Giuliani explains how Apigee helps the ETH Library make its diverse data easily discoverable, accessible, and reusable through API management in the context of Open API, opening up new potential for value creation.
The mission of the ETH Library is to ensure access to information for employees and students of ETH Zurich and other universities worldwide, for the public, and for companies engaged in research and development. At the ETH Library, we position ourselves as a customer-centric driving force for the generation and development of knowledge and its transfer to society.
Libraries are caught between changing user expectations, technological developments, budget constraints, and rising costs for content and infrastructure. To cope with these challenges, libraries must reinvent themselves. One way to do this is to embrace the paradigm of openness, which includes making their core products openly available to everyone following applicable web standards. To achieve this, modern technology stacks must be adopted and new alliances formed.
The exchange of data and knowledge on the internet has become commonplace. Information is obtained online, research data is processed digitally, findings are published electronically, services are more connected than ever, and people wish to gain access to said data at any given time.
Our vision at the ETH Library is to create personal workspaces in the physical and digital environments through forward-looking information services as well as tools that are developed together with the users to meet their needs. As the largest public natural science and technology library in Switzerland and a national center for natural science and engineering information, we adapt in line with the constantly changing requirements and opportunities presented by the digital library and will continue to do so in the future as well.
Our goal is to continually make the ETH Library a networked, integrated, customer-centric, and needs-oriented library of the future.
The ETH Library owns and provides access to a large amount of diverse and valuable data (more than 30 million books, series, journals, and non-book materials, as well as hundreds of terabytes of digitized and born-digital images). This data is available in the form of descriptive metadata and, very extensively, in the form of binary files such as images, PDF documents, and other file formats.
The management of such data is distributed across a wide range of proprietary applications using different technology stacks as well as diverse storage solutions. The current application landscape is rather isolated. This is due on the one hand to the different business and functional requirements of the individual applications and on the other hand to the historical development processes of the software products used.
The goal of every application is to fulfill the requirements of the business case or the service processes. For this purpose, data storage is selected in such a way that it optimally serves the functionality of the software used. Further use or reuse of the data, e.g., via export functionalities or interfaces, is often not considered by design. At best, applications have interfaces through which the data can be queried in the respective application-specific formats and data structures. The data silos on which the applications are based thus make it difficult to reuse the data across different applications or to make it available in a consistent form.
To break down this silo view for customers, the ETH Library successfully introduced a single point of access for customers with the launch of the Knowledge Portal discovery solution in 2010. This approach has proven itself and is also a central architectural principle of the new cloud-based ETH Library @ swisscovery discovery solution, which replaced the Knowledge Portal in December 2020.
The Data as a Service (DaaS) initiative is intended to enable the cross-application provision of data and digital resources from the ETH Library for further processing and subsequent use by the public. This will also provide a single point of access for developers in the Open Data community, who will have easy access to the data and resources of the ETH Library via an API gateway and an associated developer portal.
Within the context of the DaaS initiative, it is easier to develop new, innovative products and services with the involvement of our customers, who can become co-creators of said services.
A key feature of digital transformation is the constant availability of digital services at any given time, as well as the increasing networking of services and data with one another. Interoperability via application programming interfaces (APIs) is a key foundation of digital transformation, enabling the rise of new organizational and communication relationships and the establishment of new value networks.
As a concept, Open API stands for open interfaces and unfiltered, machine-readable electronic data that is accessible to everyone. These APIs are offered free of charge for further unrestricted use by anyone.
As a new distribution channel, Open API makes our previously invisible, little-known, or difficult-to-access data, as well as Linked Open Data networking functionality, findable, accessible, and reusable for internal and external partners. With the establishment of a developer portal specifically targeted at developers and the introduction of API management processes, a sustainable Open API ecosystem can be established. This will foster innovation, collaboration, and co-creation in the Open Data community.
Digital transformation is at the heart of a modern and future-oriented library. When looking at all the areas of activities that we want to innovate and advance digitally, we recognized that APIs are essential to our digital transformation. By adopting API management best practices, we can make our digital services more stable and accessible to customers and developers (inside and outside our organization), connect with our partner ecosystems more effectively, and quickly deliver new innovative services.
The ETH Library chose Apigee as the core platform for managing APIs throughout its application landscape. Google for Education and Google Cloud Partner Digital Schooling facilitated simple and affordable access to Apigee. Apigee acts as the technical foundation for integrating data into applications and connecting different types of data. The development and expansion of this platform enable our data resources to be made available via interfaces, supporting the development of new applications as well as the expansion of existing customer-specific applications.
Apigee provides us with capabilities that are essential for connecting, extending, and integrating systems. By connecting systems, APIs also connect organizations to other organizations, organizations to their products, services to products, or products directly to other products. Apigee’s built-in Developer Portal and Google Firebase-hosted OpenAPI (Swagger) documentation for our public APIs allow a simple and frictionless onboarding process for internal and external developers. In this sense, the concept of Open Data acts as an enabler for the development of measures in the context of the Data as a Service initiative and has strategic importance for the ETH Library.
With our API management solution in production since June 2022, we will continue to make more data available. Apigee currently provides API proxies for our Discovery API, which holds over 30 million books, image series, journals, and other materials, as well as APIs for persons and places with Linked Open Data enrichments from Wikidata, metagrid.ch, DNB Entityfacts, and beacon.findbuch by QID and GND identifiers. ETHorama, a service providing Google Maps-based access to digitized documents from ETH Library platforms such as E-Pics, e-rara, E-Periodica, e-manuscripta, and the Research Collection, also acts as a client app for API proxies developed in Apigee.
Check out our new platform and re-use our data in your own applications! The ETH Library API Team can be contacted for questions and further information at api@library.ethz.ch.
Read More for the details.
AWS Marketplace sellers can now add additional certifications to their Vendor Insights security profiles, including PCI Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), and General Data Protection Regulation (GDPR) compliance. These are in addition to already available certifications for FedRAMP, ISO 27001, and SOC 2 Type 2. AWS Marketplace Vendor Insights helps streamline the complex third-party software risk assessment process by enabling sellers to make security and compliance information available through AWS Marketplace. Buyers can more quickly discover products in AWS Marketplace that meet their security and certification standards by searching for and accessing vendor insights profiles.
Read More for the details.
Amazon Omics has added run queuing to Omics workflows. Amazon Omics is a fully managed service that helps healthcare and life science organizations store, query, and analyze genomic, transcriptomic, and other omics data at scale. With workflow run queuing, you can now queue up to thousands of workflow runs, and the service will process the runs at a rate defined by your service quota limits.
Read More for the details.
Today, AWS CloudFormation StackSets launches a new API, ListStackInstanceResourceDrifts, and adds a new filter to ListStackInstances to improve visibility into resource and stack instance drift information. A resource or stack instance is considered drifted when its actual configuration differs from its expected configuration. You can now use ListStackInstanceResourceDrifts to list and filter resources in a stack instance according to drift status. Similarly, you can use the drift status filter in ListStackInstances to check for stack instance drift in a stack set. With this launch, you can access this aggregated drift information through your management or delegated administrator AWS account.
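As a minimal sketch of the new API using boto3 (the stack set name and account ID below are placeholders, and the parameter and field names follow the API as announced, so check the current SDK reference):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Kick off drift detection on the stack set; drift results are tied to this operation.
operation_id = cloudformation.detect_stack_set_drift(
    StackSetName="my-stack-set"  # placeholder name
)["OperationId"]

# Once the drift-detection operation has completed, list drifted resources for
# one stack instance, filtered by drift status.
response = cloudformation.list_stack_instance_resource_drifts(
    StackSetName="my-stack-set",
    StackInstanceAccount="111122223333",  # placeholder account ID
    StackInstanceRegion="us-east-1",
    OperationId=operation_id,
    StackInstanceResourceDriftStatuses=["MODIFIED", "DELETED"],
)

for summary in response.get("Summaries", []):
    print(summary["LogicalResourceId"], summary["StackResourceDriftStatus"])
```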
Read More for the details.
We are excited to announce the launch of 28 new proactive controls in AWS Control Tower. This launch enhances AWS Control Tower’s governance capabilities, allowing you to implement controls at scale across your multi-account AWS environments by blocking non-compliant resources before they are provisioned for services such as Amazon CloudWatch, Amazon Neptune, Amazon ElastiCache, AWS Step Functions, and Amazon DocumentDB. These new controls help you meet control objectives such as establishing logging and monitoring, encrypting data at rest, and improving resiliency. To see a full list of the new controls, see the controls reference guide.
Read More for the details.
Amazon SageMaker Feature Store now makes it easier to share, discover and access feature groups across AWS accounts. This new capability promotes collaboration and minimizes duplicate work for teams involved in ML model and application development, particularly in enterprise environments with multiple accounts spanning different business units or functions.
Read More for the details.
We are excited to announce that Amazon EMR has launched two new capabilities that enhance the scaling experience for Amazon EMR on EC2 clusters: a new retry mechanism for faster scaling of clusters running Presto or Trino, and faster scale-down of clusters while enforcing data redundancy requirements. These capabilities are automatically enabled for clusters running Amazon EMR release 6.12 or higher, and no action is needed on your end.
Read More for the details.
Amazon DocumentDB (with MongoDB compatibility) now supports document compression using the LZ4 compression algorithm. Compressed documents in Amazon DocumentDB are up to 7x smaller than uncompressed documents. Compressed documents require less storage space and IO operations during database reads and writes, leading to lower storage and IO costs.
Read More for the details.
AWS Supply Chain Demand Planning now logs the record of actions taken by a user to AWS CloudTrail, a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. Using AWS CloudTrail, you can log, continuously monitor, retain, and respond to the record of actions taken by a user in AWS Supply Chain Demand Planning.
Read More for the details.
Starting today, you can easily apply multiple overrides to a demand forecast, eliminating the need to create and select from overrides one at a time before publishing. With this release, you can apply overrides to different products simultaneously. When you make an override, the adjustment is aggregated upwards in the forecast hierarchy as well as disaggregated to lower levels.
Read More for the details.
Amazon CloudWatch Synthetics announces a new Synthetics NodeJS runtime version, syn-nodejs-puppeteer-5.0, and recommends that customers migrate their Synthetics canaries to the latest runtime version. Runtime version syn-nodejs-puppeteer-5.0 includes updates to third-party dependency packages (Puppeteer v19.7.0 and Chromium v111.0.5563.146).
Read More for the details.
Starting today, users can experience a simplified visual layout when they access AWS Supply Chain Demand Planning from their browser. This new user interface features an enhanced onboarding experience, simplified forecast settings, a unified demand plan, and streamlined workflows for forecast overrides and demand plan finalization. This reduces the time it takes for new users to onboard and finalize forecasts.
Read More for the details.
Editor’s note: This is the second of a three-part series from the BCW Group, a Web3 venture studio and enterprise consulting firm serving enterprise clients who want to integrate existing products or otherwise branch into the Web3 space. Today’s blog will discuss how Hashport uses Google Cloud as the scaffold for its decentralized interoperability, data, and node operations. You can read the first post here.
Hashport is a bridging utility with the core function of facilitating the movement of digital assets between blockchain networks and the open-source Hedera public ledger. Hashport’s governance structure reflects the hybrid-decentralization of its parent network, with industry-leading organizations enlisted to support its independent validator set. The Hashport team is currently working to adapt its existing bridging modules to accommodate more enterprise-focused use cases and specialized industries, which is reflected in the creation of Hashport’s Pro feature sets. This includes integrations between bridging aggregators, custom payment channels, and the launch of an enterprise-grade, Hedera-native blockchain oracle service called Axiom.
BCW has played a critical role in Hashport’s evolution, from ideation to development, to deployment and beyond. As part of Hashport’s validator set, BCW manages daily platform operations and verifies transactions by running one of the validator nodes. By combining BCW’s Web3 acumen with Hashport’s interoperability expertise, the platform adheres to the highest of standards. The teams involved reflect the persistence of positive industry values and the discipline of an enterprise-grade program.
The foundation of Hashport’s core functionality lies in its validator set. The validators (written in GoLang) work in concert with multiple protocols to facilitate asset bridging while maintaining a shared state that tracks bridging functions, smart contract status, the Hedera Token Service (HTS), and user-generated receipts. Google Cloud is an ideal choice for simplifying the management of Hashport’s validators and core network integrations such as Ethereum and the BNB Smart Chain (BSC) blockchain network.
One critical aspect of managing validators for multiple organizations is ensuring robust security, particularly for secure key computation. Google Cloud offers a range of security options such as confidential computing using AMD EPYC processors. Although each validator makes their own management decisions according to their institutional requirements, Hashport provides guidance, suggesting best practices wherever possible.
Hashport’s functionality also extends to nodes. Google Cloud’s Blockchain Node Engine enhances the ease and efficiency of running and managing nodes on Hashport’s network, ensuring strong performance and security. This powerful tool integrates seamlessly with Google Cloud’s other features, such as Virtual Private Cloud (VPC) internal networks, Network Intelligence Center (NIC), and performance monitoring.
As the decentralized finance (DeFi) landscape continues to evolve, the role of cloud services becomes increasingly relevant. Google Cloud offers a robust infrastructure that fosters innovation while ensuring security, making it an ideal choice for Hashport. The combination of Google Cloud’s advanced technology and the potential of decentralized finance highlights the importance of a balanced approach when discussing the future of the industry.
Google Cloud is known for its user-friendly interface and streamlined administrative features. As a Web3 bridging solution, Hashport harnesses Google Cloud’s usability, roles, groups, and Bigtable integration for efficient infrastructure management. This is particularly useful when managing multiple services and nodes owned by a diverse set of companies with individual requirements. Leveraging Google Cloud’s intuitive interface and granular access control through Identity and Access Management (IAM) lets developers seamlessly access resources and assign permissions even across organizations.
Extensive documentation and workflow assistance, including guides on billing units and crypto funding complexities, further enhance Google Cloud’s usability in the Web3 space. Bigtable, a highly scalable NoSQL database, is vital to Hashport’s data processing and storage needs. The low-latency and high-throughput capabilities of Google Cloud enable the efficient management of Web3 transactions. Google Cloud’s usability, roles, groups, and Bigtable integrations all contribute to Hashport’s operational success, providing a seamless experience in managing its infrastructure. Emerging Hashport initiatives, such as the Hashport Pro platform, address the rising demand for dependable enterprise-grade Web3 solutions. To manage the increasing volume of transactions and varied data flows, a robust infrastructure like Google Cloud is needed for efficient processing.
Hashport is a decentralized interoperability platform, and the team communicates with many organizations across its supported networks and beyond. These discussions help the team to better understand the complexities and challenges of DeFi applications and the often idiosyncratic needs of enterprises and SMEs as they look to enter the Web3 space.
Over time, the Hashport team identified a specific need for enterprise-grade data oracles to extend the capabilities of building on the Hedera Public Network. This need coincided with the team’s understanding of the potential of Hashport’s validator swarm to not only secure and validate transactions for token bridging, but also to transfer generic data between Hedera services and its connected Ethereum Virtual Machine (EVM) ecosystems. This led to the establishment of Axiom, an enterprise-grade, Hedera-native data oracle service.
As an integral part of Hashport Pro, Axiom utilizes the platform’s robust validator infrastructure to ensure that secure and reliable data transmission occurs between Hedera and EVM-based platforms. Axiom complements Hashport’s existing infrastructure to significantly improve interoperability between Hedera and other Hashport-connected networks. Google Cloud’s scalability features and Google Kubernetes Engine deployments, along with the ability to scale via resource pools, will in turn help to reduce architecting time and optimize workflows among all parties as the platform is rolled out.
Google Cloud’s data management capabilities make it an ideal choice for supporting the growth of Hashport and Axiom. The reliability and scalability of Google Cloud is crucial for a decentralized oracle service which requires a robust infrastructure for precise data handling. Moreover, Hashport can leverage additional Google Cloud resources, like the Google Cloud Marketplace and Partner Interconnect, to broaden its network and engage with a wider customer and partner base.
Google Cloud’s data management, processing, and alternative channels provide a strong foundation for Hashport’s continued success. The features and functionalities that Google Cloud provides are essential to the platform’s overall growth strategy, allowing Hashport to deliver secure Web3 solutions and meet changing market demands as needed.
Read More for the details.
The new firmware analysis feature in Defender for IoT enables security teams to get deeper visibility into IoT/OT devices by providing better insights into the foundational software they are built on.
Read More for the details.
Application Gateway for Containers is the next evolution of Application Gateway’s Ingress Controller, bringing performance, scale, deployment, and feature enhancements.
Read More for the details.
Migrate your Classic VMs before September 6, 2023 to avoid service disruptions.
Read More for the details.
Have you ever wondered what it will cost to run SAP on Google Cloud? Or how various configuration and feature choices will affect your costs? If you’ve ever tried to estimate the cost yourself, you know how difficult it is to get initial high-level cost estimates without being able to input a lot of detailed information about the landscape.
As part of our commitment to provide easy-to-use capabilities to help you understand why Google Cloud is a cost effective cloud for SAP applications, we’ve launched SAP Cost Estimator, which is built into the Cloud Spend Estimator module of Google Cloud’s Migration Center. Read on to learn more.
The SAP Cost Estimator determines costs for SAP deployments in predefined sizes (X-Small, Small, Medium, Large, X-Large), based on a typical Infrastructure as a Service (IaaS) bill of materials for SAP applications. You can select predefined sizes for the following SAP applications, with or without a high availability/disaster recovery (HA/DR) setup:
S/4HANA
BW/4HANA
BusinessObjects Business Intelligence
Fiori
BusinessObjects Data Services (BODS)
Process Orchestration (PO)
SAP NetWeaver ABAP
The predefined sizes include a default storage configuration. However, if you require additional storage, you can easily include both block storage and Cloud Storage as per your requirements. You can also compare the costs for different regions after planning your primary and DR regions, and see the pricing with different committed use discounts (CUDs).
The following diagram illustrates the breakdown of IaaS costs across the SAP workload landscape by Dev, QAS, Production systems and more.
The SAP Cost Estimator provides an initial cost estimate, giving valuable insights into the cost of running SAP infrastructure on Google Cloud. This serves as a prelude to running a more in-depth complimentary discovery and assessment of your IT landscape, where we’ll create a more precise migration plan and budget estimate. With this assessment, featured in Migration Center, you’ll find detailed information regarding each system in your landscape (vCPU, Memory, Storage, etc.) and a TCO report based on your migration preferences.
We’re also excited to provide SAP users with insight into what it might cost to run their SAP applications on Google Cloud. To quote one of our customers, “The SAP Cost Estimator will accelerate the migration journey to Google Cloud for SAP Customers by having an initial understanding of what it would cost to run SAP applications on Google Cloud Infrastructure. Thanks for providing us with this functionality.”
Learn more about SAP Cost Estimator, try it out directly in the Migration Center, or read more on all the ways we support SAP on Google Cloud.
Read More for the details.
Serverless Spark is a fully managed, serverless product on Google Cloud that lets you run Apache Spark, PySpark, SparkR, and Spark SQL batch workloads without provisioning or managing your own cluster. With the Apache Spark SQL connector for Google BigQuery, Serverless Spark lets you run data processing jobs directly against your data in BigQuery, all from within a serverless environment. As part of the Dataproc product portfolio, Serverless Spark also supports reading and writing to your Dataproc Metastore and provides access to the Spark History Server when configured with a Dataproc Persistent History Server.
We’re pleased to announce a new interactive tutorial directly in the Google Cloud console that walks you through several ways to start processing your data with Serverless Spark on Google Cloud.
Below we’ll cover at a high level what you’ll learn in the tutorial, which goes much deeper than this blog.
This tutorial will take you approximately 30 minutes. A basic understanding of Apache Spark will help you understand the concepts in this tutorial. Learn more about Apache Spark in the project documentation.
Apache Spark is an open-source distributed data processing engine for large-scale Python, Java, Scala, R, or SQL workloads. Its core library contains an extensive set of tools for use cases such as machine learning, graph processing, structured streaming, and a pandas integration for pandas-based workloads. In addition, numerous third-party libraries extend Spark’s functionality, including sparknlp and database connectors such as the Apache Spark SQL connector for Google BigQuery. Apache Spark supports multiple table formats, including Apache Iceberg, Apache Hudi, Parquet, and Avro.
This tutorial teaches you how to read and write data in BigQuery using PySpark and Serverless Spark. The Apache Spark SQL connector for Google BigQuery is now included in the latest Serverless Spark 2.1 runtime. You can also submit jobs programmatically, as sketched below.
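For example, a minimal sketch using the Dataproc Python client library (the project, region, bucket, script path, and batch ID are placeholder values):

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"  # placeholders

client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = {
    # PySpark script staged in Cloud Storage (placeholder path).
    "pyspark_batch": {"main_python_file_uri": "gs://my-bucket/jobs/bigquery_wordcount.py"},
    # The Serverless Spark 2.1 runtime bundles the BigQuery connector.
    "runtime_config": {"version": "2.1"},
}

operation = client.create_batch(
    parent=f"projects/{project_id}/locations/{region}",
    batch=batch,
    batch_id="bq-pyspark-example",
)
print(operation.result().state)  # blocks until the batch finishes
```

The staged script itself might read from and write to BigQuery with the connector along these lines (the output dataset and staging bucket are likewise placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bigquery-wordcount").getOrCreate()

# Read a public BigQuery table with the Spark BigQuery connector.
words = spark.read.format("bigquery").load("bigquery-public-data.samples.shakespeare")

word_counts = words.groupBy("word").sum("word_count")

# Write the aggregated result back to BigQuery via a temporary GCS staging bucket.
(
    word_counts.write.format("bigquery")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save("my_dataset.shakespeare_word_counts")
)
```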
Service-level jobs, such as Serverless Spark requesting extra executors when scaling up, are captured in Cloud Logging and can be viewed in real-time or later.
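As a rough sketch of pulling those entries with the Cloud Logging Python client (the project and batch ID are placeholders, and the resource type and label names are assumptions; check the Logs Explorer for the exact values your batches emit):

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project

# Assumed filter for Serverless Spark batch logs; adjust it to match what
# your batches actually emit in the Logs Explorer.
log_filter = (
    'resource.type="cloud_dataproc_batch" '
    'AND resource.labels.batch_id="bq-pyspark-example"'
)

for entry in client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING):
    print(entry.timestamp, entry.payload)
```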
The console output will be visible via the command line as the job is running but is also logged to the Dataproc Batches console.
You can also view Spark logs via a Persistent History Server set up as a Dataproc single-node cluster. Create one below.
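A minimal sketch using the Dataproc Python client library (the cluster name, bucket, project, and region are placeholders; the allow.zero.workers property is what makes the cluster single-node):

```python
from google.cloud import dataproc_v1

project_id, region = "my-project", "us-central1"  # placeholders

cluster_client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "spark-history-server",  # placeholder name
    "config": {
        "software_config": {
            "properties": {
                # Zero workers makes this a single-node cluster; the log directory
                # is a GCS path (placeholder bucket) shared with Serverless Spark jobs.
                "dataproc:dataproc.allow.zero.workers": "true",
                "spark:spark.history.fs.logDirectory": "gs://my-phs-bucket/*/spark-job-history",
            }
        },
        # Component Gateway exposes the Spark History Server UI in the console.
        "endpoint_config": {"enable_http_port_access": True},
    },
}

operation = cluster_client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
operation.result()  # waits for the cluster to be created
```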
You can include this when running Serverless Spark jobs to view Spark logs.
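Continuing the batch sketch above, pointing a Serverless Spark batch at that history server is a matter of adding a peripherals configuration before calling create_batch (the cluster name is again a placeholder):

```python
# Spark event logs from the batch then appear in the history server created above.
batch["environment_config"] = {
    "peripherals_config": {
        "spark_history_server_config": {
            "dataproc_cluster": (
                f"projects/{project_id}/regions/{region}/clusters/spark-history-server"
            )
        }
    }
}
```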
The Persistent History Server is available in the Batches console by clicking on the Batch ID of the job and then View Spark History Server.
Dataproc templates provide functionality for simple ETL (extract, transform, load) and ELT (extract, load, transform) jobs. Using this command line-based tool, you can move and process your data for simple and common use cases. These templates utilize Serverless Spark but do not require the user to write any Spark code. Some of these templates include:
GCStoGCS
GCStoBigQuery
GCStoBigtable
GCStoJDBC and JDBCtoGCS
HivetoBigQuery
MongotoGCS and GCStoMongo
Check out the full list of templates.
The following example will use the GCStoGCS template to convert a GCS file from CSV to Parquet.
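The template itself is driven by command-line properties rather than user-written Spark code; as a rough illustration of the transformation it performs, an equivalent PySpark sketch (bucket and paths are placeholders) looks like this:

```python
from pyspark.sql import SparkSession

INPUT = "gs://my-bucket/input/*.csv"        # placeholder input location
OUTPUT = "gs://my-bucket/output/parquet/"   # placeholder output location

spark = SparkSession.builder.appName("gcs-csv-to-parquet").getOrCreate()

# Read CSV objects from Cloud Storage and write them back as Parquet.
df = spark.read.option("header", True).csv(INPUT)
df.write.mode("overwrite").parquet(OUTPUT)
```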
Check out the interactive tutorial for a more in-depth and comprehensive view of the information covered here. New customers also get Google Cloud’s $300 credit.
Learn more:
Read More for the details.
We’re excited to announce that Google Cloud Support is now available as Public Preview* in the Google Cloud mobile app.
We understand how important it is to be able to access and collaborate on Cloud Support cases. That’s why we’ve made it easy for you to manage your cases from the mobile app. There is no need to open a mobile browser, find the right website, and navigate to where you need to be. Your Cloud Support cases are now available at your fingertips.
In the Google Cloud mobile app, simply navigate to “Cloud Support (preview)” from any screen by touching your profile picture.
With just one touch you can see the list of all your support cases and easily dive into their important details.
Not only can you easily view and manage your existing Cloud Support cases, you can also create new ones on the fly with the Google Cloud mobile app.
Seamlessly browse case comments to see the full history of the support case, and respond to the case with new comments.
The Google Cloud app is a powerful tool for managing your Google Cloud environment on the go, and it just got better. To summarize, we’ve added Cloud Support cases to our mobile app, which lets you:
Read existing conversations with the Google Cloud Support Team
Respond to Google Cloud Support cases
Create new Google Cloud Support cases
These new features are available for you to use today in Public Preview. Download the Google Cloud mobile app from Google Play or the Apple App Store to try it out. Check out the Google Cloud app documentation to learn more about what you can do on-the-go. If you have any feedback, we would love to hear from you — simply click on the “send feedback” button in the app to share your experience.
*Preview – This product or feature is covered by the Pre-GA Offerings Terms of the Google Cloud Terms of Service. Pre-GA products and features might have limited support, and changes to pre-GA products and features might not be compatible with other pre-GA versions. For more information, see the launch stage descriptions.
Read More for the details.