Documentation generated from source code is very popular. Solutions such as Swagger are available for many languages and frameworks. However, the limitations of annotation-based tools are becoming apparent: an overwhelming number of documentation annotations makes for great docs but muddies the source code. Then something changes, and the docs are out of date again.
Test-driven documentation solutions, such as Spring REST Docs, generate example snippets for requests and responses from tests, ensuring both code coverage and accurate documentation. Such a tool can even fail the build when the documentation becomes out of date. This session will walk through how to implement test-driven documentation for Groovy-ecosystem technologies like Spring Boot, Grails, and Ratpack.
Machine learning has recently made enormous progress in a variety of real-world applications, such as computer vision, speech recognition, and language processing. And now, Java has the libraries to help you apply these techniques to large data sets, with new Spark-based tools like Deeplearning4j (DL4J).
In this workshop, you will learn the basic building blocks of deep learning: gradient descent, backpropagation, model training, and evaluation. We'll cover how to build and train supervised machine learning models, give you an overview of deep learning, and show you how to recognize handwritten digits. You will gain an intuition for how to develop custom models to discover new insights and untapped patterns in big data.
No prior experience in machine learning is required.
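As a taste of the first building block, here is a minimal gradient-descent sketch in plain Java. It is a toy illustration (fitting y = w*x to data generated with w = 3), not the DL4J API; the class and method names are invented for the example.

```java
// Toy gradient descent: fit y = w*x by minimizing mean squared error.
// Illustrative only - real models use libraries such as DL4J.
public class GradientDescentDemo {
    public static double fit(double[] xs, double[] ys, double lr, int epochs) {
        double w = 0.0;
        for (int e = 0; e < epochs; e++) {
            double grad = 0.0;
            for (int i = 0; i < xs.length; i++) {
                // Derivative of 0.5*(w*x - y)^2 with respect to w is (w*x - y)*x.
                grad += (w * xs[i] - ys[i]) * xs[i];
            }
            w -= lr * grad / xs.length; // step against the average gradient
        }
        return w;
    }

    public static void main(String[] args) {
        double[] xs = {1, 2, 3, 4};
        double[] ys = {3, 6, 9, 12}; // exactly y = 3x
        System.out.println(fit(xs, ys, 0.05, 500)); // converges toward 3.0
    }
}
```

Backpropagation is this same idea applied layer by layer through a network, with the chain rule carrying gradients backward from the loss.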
Presentation: Docker Swarm or Kubernetes - Pick your framework!
Docker Swarm and Kubernetes are two of the most capable, and most heavily used, container orchestration frameworks. This code-driven session will compare and contrast Docker Swarm and Kubernetes on the following aspects:
- Creating a Couchbase cluster
Attendees will have a clear understanding of what each orchestration framework has to offer. They will also learn techniques for using these platforms effectively.
gRPC is a high-performance, open source, general-purpose RPC framework that puts mobile and HTTP/2 first. gRPC is based on many years of Google's experience building distributed systems: it is designed to be low-latency and bandwidth- and CPU-efficient, to power massively distributed systems that span data centers as well as mobile apps, real-time communications, IoT devices, and APIs. It is also interoperable across multiple languages.
But beyond the fact that it's more efficient than REST, we'll look into gRPC's streaming API, with which you can establish server-side streaming, client-side streaming, and bidirectional streaming! This allows developers to build sophisticated real-time applications with ease.
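In gRPC, each of those streaming modes is declared in the service definition by placing the `stream` keyword on the request side, the response side, or both. The sketch below uses a hypothetical telemetry service invented for illustration:

```protobuf
syntax = "proto3";

// Hypothetical service showing the four gRPC call shapes.
service Telemetry {
  // Unary: one request, one response.
  rpc GetStatus (StatusRequest) returns (StatusReply);
  // Server-side streaming: the server pushes a stream of updates.
  rpc WatchMetrics (StatusRequest) returns (stream Metric);
  // Client-side streaming: the client uploads a stream, gets one summary.
  rpc UploadSamples (stream Metric) returns (UploadSummary);
  // Bidirectional streaming: both sides read and write independently.
  rpc Mirror (stream Metric) returns (stream Metric);
}

message StatusRequest { string id = 1; }
message StatusReply   { string state = 1; }
message Metric        { string name = 1; double value = 2; }
message UploadSummary { int32 count = 1; }
```

From a definition like this, the protobuf compiler generates client stubs and server skeletons in each supported language.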
Presentation: Fast Cars, Big Data - How Streaming Can Help Formula 1
Modern cars produce data. Lots of data. And Formula 1 cars produce more than their share. I will present a working demonstration of how modern data streaming can be applied to the data acquisition and analysis problem posed by modern motorsports.
Instead of bringing multiple Formula 1 cars to the talk, I will show how we instrumented a high fidelity physics-based automotive simulator to produce realistic data from simulated cars running on the Spa-Francorchamps track. We move data from the cars, to the pits, to the engineers back at HQ.
The result is near real-time visualization and comparison of performance, and a great exposition of how to move data with messaging systems like Kafka, process it in real time with Apache Spark, and analyse it using SQL with Apache Drill.
HotSpot's optimizing Just-In-Time compiler, C2, is reaching its end-of-life, and it's time to look for alternatives. One very promising replacement candidate is Graal, a Java JIT compiler written in Java. It has been developed by Oracle Labs for a couple of years now and has reached a state where it is a viable replacement. Twitter's huge distributed system tickles every corner of the JVM and is the perfect testing ground for a new JIT compiler technology like Graal. This presentation will cover Twitter's experiences with Graal, good and bad: the bugs we found, and also the wins.
A quick introduction to the Rust language: code comparisons between Java and Rust, compiler features, the type system, the environment, and integrating the two languages.
- Brief History of Rust - how did the project get started, main goals, first stable releases.
- Main types and functions - writing very basic code and a deep comparison between Java style and Rust style. The level of verbosity is similar; however, the way you make types explicit can differ.
- Structs and Traits, contrasting them with Interfaces and Abstract Classes in Java.
- Lifetimes vs. GC - how can we reclaim memory without a garbage collector? Rust's ownership and lifetime model is an interesting alternative to Java's GC. We will show both the good parts and the complicated parts.
- Notes about the compiler
At the JavaOne keynote this year, Mark Reinhold talked about how Java 9 is much bigger than Jigsaw. To put that in numbers - 80+ JEPs bigger! Yes, we see more presentations on Jigsaw, since it brings modularity to the once-monolithic JDK. But what about those other JEPs?!
One of those "other" JEPs is JEP 143, 'Improve Contended Locking'. Monica will apply her performance-engineering approach to JEP 143, using Oracle Studio's Performance Analyzer tool. The crux of the presentation will be comparing the performance of contended locks in JDK 9 to JDK 8.
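The pattern JEP 143 targets is easy to reproduce: many threads repeatedly entering the same monitor. The sketch below is a hedged illustration of such a contended lock (class and method names invented for the example); it shows only the contention pattern, not the JDK 8 vs. JDK 9 numbers.

```java
// Many threads fighting over one monitor - the contended-locking pattern
// that JEP 143 set out to make faster.
public class ContendedLockDemo {
    private long count = 0;
    private final Object lock = new Object();

    public long run(int threads, int incrementsPerThread) {
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    synchronized (lock) { // heavily contended monitor
                        count++;
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try {
                w.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 8 threads x 100,000 increments; the lock keeps the count exact.
        System.out.println(new ContendedLockDemo().run(8, 100_000));
    }
}
```

Timing a loop like this under different JDKs is the simplest way to see the effect a profiler such as Performance Analyzer would break down in detail.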
The Java 8 release takes Java to a whole new level. Learning the new features is just the first step. The real question is how to make the best use of them. There won’t be much time in this session to introduce the new features, so you’ll need to know what lambdas and method references are in advance. Instead, the focus will be on how coding in Java 8 differs from previous Java versions, and how to avoid going too far with the new goodies. Join me for an opinionated session of best practices.
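As a flavor of the restrained style such best practices tend to recommend, here is a small, hedged example (data and grouping invented for illustration): method references where they read well, and a stream pipeline kept short rather than clever.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Restrained Java 8 style: method references and a short, readable pipeline.
public class Java8StyleDemo {
    public static Map<Integer, List<String>> byLength(List<String> words) {
        return words.stream()
                    .map(String::toLowerCase)                     // method reference over lambda
                    .collect(Collectors.groupingBy(String::length));
    }

    public static void main(String[] args) {
        // Groups the lowercased words by their length.
        System.out.println(byLength(Arrays.asList("Java", "Groovy", "Rust")));
    }
}
```

The same result is achievable with nested loops or a chain of intermediate maps; the point of idiomatic Java 8 is that the pipeline states the intent in one readable expression.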
Hands-on Lab: Large Scale Data Pipelines with Kafka, Spark Streaming and Cassandra
In this workshop we will walk through building this type of large-scale, mission-critical data pipeline using Kafka, Spark Streaming, and Cassandra. The workshop will start by looking at the individual technologies that make up the pipeline. Then we will discuss the overall architecture of the data pipeline and how it addresses the core principles behind such systems. Attendees will get hands-on with exercises that walk through each piece of the architecture.