Digital Transformation and Hybrid Multi-Cloud
I had the pleasure of attending the ONUG Spring 2018 conference last week in San Francisco, CA. The themes for ONUG Spring were definitely “Digital Transformation” and “Hybrid Multi-Cloud”.
ONUG is attempting to help the IT user community speak with one voice about the challenges they face so that their vendors can build solutions addressing those challenges. For Digital Transformation, this means the network (and the applications running on top of it) is increasingly a critical piece of business infrastructure, so systems and solutions have to be built and designed with security and redundancy in mind. Additionally, as Enterprises migrate workloads from on-premises data centers to cloud environments, new networking challenges are arising.
As always, I had to juggle work demands with sitting in on sessions. However, here are some takeaways from the sessions I was able to listen in on.
Nick Lippis, Co-Founder and Co-Chair of ONUG, kicked off the conference with a Keynote. Of note: FedEx will be hosting ONUG Fall 2018 in New York, October 22-23. Nick mentioned they are finalizing the details of a potential ONUG event in December in London hosted by Barclays Bank – more details on that shortly. Finally, Nick explained ONUG’s new RFP IT! focus for ONUG Spring. The idea is for the working groups within ONUG to define requirements that the IT End User community can include in their RFPs. The vendors participating in the working groups can then begin making sure their products and solutions are compatible. This serves the ONUG goal of giving the IT End User community a single voice for what they need from their IT vendors.
Lane Patterson, VP Global Network Infrastructure and Services at Verizon Oath, was up next. Lane spoke about Oath’s (formerly Yahoo) migration from a monolithic application architecture to a more distributed microservices approach. A lot of people feel that what FANG (Facebook, Amazon, Netflix, and Google) does doesn’t apply to Enterprise IT. Lane’s point was that even for companies not operating at that scale, there are lessons to be learned and technology that can be re-used at lower scale.
Ernest Lefner, Senior Vice President, Network Engineering at Bank of America, followed Lane. He spoke on the work BoA has done around the following:
- Policy Driven Security and Infrastructure
- API in front of Everything
- Service Mesh Mapping
Ann Sherry, Executive Director, IT Product Management & Delivery at Kaiser Permanente, closed out the Keynote with a talk entitled “From Doing Digital to Being Digital.” It contained some fascinating information on the challenges Kaiser faced as they moved to a more digital approach. As an example, they are trying to get their subscribers to use their new app for their insurance card, medical records, payment, etc. However, they still run into medical providers that process credit cards using a carbon copy machine.
From the Trenches: Re-Tooling IT Operations with Machine Learning and AI
The next session I was able to participate in was a panel moderated by Steve Collins, Principal Analyst at ACG Research. Here are the participants and the key takeaways I captured about their involvement with Machine Learning and/or AI.
Bryan Larish, Director of Technology at Verizon
- Challenge was to make the network run better using ML and AI concepts.
- Hired people who understand the space and then got laser-focused on specific problem statements to solve
- Still room for improvement using basic stats on data before jumping into neural nets, deep learning, etc.
- Feeding a neural network time series data from a lot of sources is very useful
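The point about getting value from basic statistics before reaching for neural nets can be illustrated with a minimal sketch. This is my own example, not Verizon’s tooling: a rolling-window z-score over network latency samples, with illustrative names and thresholds.

```python
# Minimal sketch: basic statistics on time-series network data before
# reaching for neural nets. All names and thresholds are illustrative.
import statistics

def zscore_anomalies(samples, window=10, threshold=3.0):
    """Flag indices that deviate more than `threshold` standard
    deviations from the rolling mean of the preceding `window`."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency measurements (ms) with one obvious spike at index 15
latencies = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0,
             10.2, 10.1, 9.9, 10.0, 10.1, 50.0, 10.0, 10.2]
print(zscore_anomalies(latencies))  # → [15]
```

Nothing fancy, but a rolling mean and standard deviation already catches the spike — a useful baseline before anything deep-learning shaped.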
Harmen Van Der Linde, Global Head of CitiManagement Tools at Citigroup
- Internal cloud infra with containers so built a data lake for events, logs, etc. Wanted to make it easier to get signal from the noise by looking at patterns and trends in that data. Event correlation.
- Tooling for monitoring and analytics
- Use some commercial tools but also OpenShift, Elasticsearch and others.
- Time series regression is very powerful for them.
- Storing data in one place makes it much easier to correlate.
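Since time series regression came up as being very powerful for them, here is a minimal sketch of what that can mean at its simplest: an ordinary least-squares trend fit over a metric. The function, field names, and data are my own illustrations, not Citi’s actual tooling.

```python
# Minimal sketch of time-series regression: fit a linear trend to a
# metric sampled at regular intervals. Names and data are illustrative.

def linear_trend(series):
    """Ordinary least-squares fit of y = slope * t + intercept over
    t = 0, 1, 2, ... Returns (slope, intercept)."""
    n = len(series)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(series) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, series))
    var = sum((t - t_mean) ** 2 for t in ts)
    slope = cov / var
    return slope, y_mean - slope * t_mean

# Error counts per hour that are steadily climbing
errors = [3, 4, 6, 7, 9, 11, 12, 14]
slope, intercept = linear_trend(errors)
print(f"errors growing at ~{slope:.2f}/hour")
```

A positive slope on an error-count series is exactly the kind of signal-from-noise trend detection described above; real systems would fit this per metric across the data lake.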
Keith Shinn, SVP of Service Experience and Insights at Fidelity Investments
- Had a pretty strong event correlation engine but was based on old technology. Could take 15 mins to poll the data and churn. Migrated to a pub/sub Kafka bus architecture.
- Multiple teams collecting data so tried to consolidate and let everyone consume what they need.
- Using mostly open source tools to build this. Third party products were too expensive to deploy at all their sites.
- No standard structure to logs, so it’s hard for machines to consume them (they were made for humans to read).
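The last bullet — logs written for humans, not machines — is usually addressed by emitting structured (e.g. JSON) log lines. A minimal sketch, with field names of my own choosing rather than anyone’s actual schema:

```python
# Minimal sketch: emit structured (JSON) log lines so machines can
# parse fields directly instead of regex-scraping free-form text.
# Field names here are illustrative, not any company's schema.
import json
import time

def log_event(service, level, message, **fields):
    """Emit one machine-parseable JSON log line and return it."""
    record = {
        "ts": time.time(),
        "service": service,
        "level": level,
        "message": message,
        **fields,
    }
    line = json.dumps(record, sort_keys=True)
    print(line)
    return line

line = log_event("checkout", "ERROR", "payment timeout",
                 order_id="A123", latency_ms=1500)
# Any downstream consumer can recover the fields without parsing prose:
assert json.loads(line)["order_id"] == "A123"
```

Once every team emits lines like this onto a shared bus, the consolidation and event correlation described in the other bullets gets much simpler.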
Steve also did a nice write-up on the panel from his point of view over on Kentik’s blog site. DISCLAIMER: I am employed by Kentik for my day job.
I’m sure I missed a lot of great content at ONUG, but these are the notes I was able to capture.