splunk> Kubernetes & OpenTelemetry Foundational Workshop Guide #1

Author:

Gerry D’Costa

Title:

Staff Solutions Engineer

Location:

Vancouver, BC, Canada

Version:

2.0.4-ami

Purpose:

To learn the basic operational workings and configuration of a Kubernetes cluster, and how **Splunk’s distribution of the OpenTelemetry Collector** can be configured to send Kubernetes and application data (metrics, traces, and logs) into Splunk Enterprise / Splunk Cloud and the Splunk Observability Cloud platform.

Table Of Contents:

[TOC]

Workshop Architecture:


High-level architectural summary of what we will be delivering today:

![alt_text](images/image1.png "image_tooltip")

Workshop delivery considerations

How are these workshops structured & what are we delivering today?

This Splunk Kubernetes & OpenTelemetry Workshop was constructed to cater to various participant skill levels and capabilities. Altogether, the series is made up of four (4) separate workshops:

Foundational

1.0 hrs Foundational Workshop #1 ← delivering today
  • Discover the importance and configuration of the kubernetes audit log;
  • Containerize a java application as a docker image;
  • Push the docker image to a local repo;
  • Deploy the container into a local kubernetes cluster;
1.5 hrs Foundational Workshop #2
  • Start and configure a local Splunk Enterprise instance;
  • Deploy the Splunk otel collector into kubernetes;
  • Configure the otel collector to collect kubernetes logs, application logs and otel collector logs into the Splunk Enterprise Platform;
  • Install and deploy the jmeter load tester;
  • Learn how to use annotations in dockerfiles and the otel collector for metadata collection;

Advanced

1.5 hrs Advanced Workshop #1
  • Configure the otel collector to collect kubernetes metrics, otel collector metrics and application logs/metrics into Splunk Enterprise Platform;
  • Learn how to easily visualize metrics and traces in Splunk Enterprise Platform;
1.5 hrs Advanced Workshop #2
  • Configure the otel collector to send metrics and trace data to Splunk Observability Cloud;
  • Configure java always-on profiling collection;
  • Configure Splunk RUM to send data to Splunk Observability Cloud;

Optional prerequisites

Consider running through the [Learn Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/) interactive tutorial on [kubernetes.io](https://kubernetes.io/) before running this workshop.

LEARNING MOMENT:


_Approximately 5 minutes_

What are containers and Kubernetes and why should I care?

What is the need?


With society's increased adoption of powerful devices such as smartphones, tablets, and laptops comes the increasing need for new online capabilities such as app stores, e-commerce sites, and on-demand services delivered anywhere in the world.


To meet this need, software development teams had to become faster at delivering new products and services that could be deployed, tested live, rolled back quickly, and updated to address growing security issues and threat surfaces.

Monolithic vs. Microservices Application Development:


Agile software development and DevOps methodologies allowed for these capabilities but unfortunately did not translate well with traditional monolithic application development practices.

![alt_text](images/image2.png "image_tooltip")

In a monolithic application, a single software issue could mean hundreds of person-hours debugging and rebuilding an entire application stack. These problems could result in downtime to the app until the issue was resolved.


The same issue with a microservice application would allow for better resiliency as each major function is broken out into containerized services. Issues could be isolated and resolved while keeping most of the application up and running.

Why do we need to understand Kubernetes? (k8s)


Software and IT operational practitioners - or DevOps teams - found that as they built services using containers, the real power came from keeping them stateless. If a container didn’t have to worry about application state (e.g., a database), it could easily be destroyed, rebuilt, and scaled up and down as necessary.


This capability meant that we needed a way to manage the orchestration of containers. For example, if the load on a container is too high, scale up the container and its associated load balancer; if a container fails for some reason, dynamically restart it to keep services running and resilient.


Kubernetes, initially developed by Google, became the standard for how containers were deployed, managed, and orchestrated. Kubernetes - shortened to k8s - manages how containers are run, constantly gathers statistics on their operating state, and can be configured to make decisions to ensure the app runs optimally.


The combination of using containers with Kubernetes is what we call a microservice architecture.


**DevOps teams are now in every organization rapidly building new applications to service their customers using microservice architectures using Kubernetes as their foundation. With this rapid adoption, comes the accelerated need to gain visibility into these environments for performance, security, and reliability purposes.**
1. Getting Started:

    _Approximately 5 minutes_

Before you do anything else….

1. Download an SSH Client if necessary:
    * MacOS Users:
        * If you are running MacOS, you have an SSH client called “Terminal” already installed and available to use for this workshop;
        * Press the <COMMAND> and SPACEBAR keys together to open your spotlight search:

![alt_text](images/image3.png "image_tooltip")

* Type in “terminal”, then hit <ENTER>

![alt_text](images/image4.png "image_tooltip")

* Windows Users:
        * If you are running Windows, the most common way to perform SSH activities is to use PuTTY.
        * There is an excellent set of instructions to install PuTTY on Windows located at [SSH Academy website](https://www.ssh.com/academy/ssh/putty/windows/install).


If a Linux Ubuntu 22.04 instance was provided to you by your SE / CSE or Workshop Leader as part of a workshop, use the credentials provided to access your instance.

**Done? Okay let’s begin…**





2. FW#1 - Configure Docker and Kubernetes (minikube) environment:

    _Approximately 10 minutes_



## Validate minikube and kubectl applications are available



2. Open a new Terminal and log into your Linux Ubuntu instance as the “splunk” user;

_(use an existing Terminal session if one is already open)_

$ ssh splunk@<UBUNTU_EXTERNAL_IP>

$ whoami

splunk

3. Test that [minikube](https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/#:~:text=Minikube%20is%20a%20lightweight%20Kubernetes,%2C%20macOS%2C%20and%20Windows%20systems.) is now in your path and can be run from anywhere;

_Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node._

$ minikube version

minikube version: v1.29.0
commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3

splunk@k8host01:~$

4. Test that [kubectl](https://kubernetes.io/docs/tasks/tools/#:~:text=The%20Kubernetes%20command%2Dline%20tool,see%20the%20kubectl%20reference%20documentation.) is now in your path and can be run from anywhere;

_kubectl is a command-line tool which allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs._

$ kubectl version -o json

{
  "clientVersion": {
    "major": "1",
    "minor": "26",
    "gitVersion": "v1.26.2",
    "gitCommit": "fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b",
    "gitTreeState": "clean",
    "buildDate": "2023-02-22T13:39:03Z",
    "goVersion": "go1.19.6",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.7"
}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

splunk@k8host01:~$

_You can ignore the message: The connection to the server localhost:8080 was refused - did you specify the right host or port?_



## Run and test your Kubernetes / minikube environment



5. Set the minikube driver as docker:

$ minikube config set driver docker

splunk@k8host01:~$

_NOTE: In some cases, we need to manually set the minikube driver to avoid errors such as “PROVIDER_DOCKER_NOT_RUNNING”._

6. Delete all minikube configurations:

$ minikube delete

splunk@k8host01:~$

7. Build a new Kubernetes / minikube environment from scratch:

$ minikube start --no-vtx-check --driver=docker --subnet=192.168.49.0/24

😄  minikube v1.29.0 on Ubuntu 22.04 (xen/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.26.1 preload ...
    > preloaded-images-k8s-v18-v1...:  397.05 MiB / 397.05 MiB  100.00% 44.87 M
    > gcr.io/k8s-minikube/kicbase...:  407.18 MiB / 407.19 MiB  100.00% 26.50 M
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

splunk@k8host01:~$

NOTE: Those familiar with minikube will know that the default subnet used by minikube is 192.168.49.0/24. However, we are explicitly defining this subnet as a way to show workshop users that it is possible to have minikube run on a different subnet, should IP subnet conflicts occur in your custom environments.

Also note that subnet definitions can only be applied on a NEW minikube install and cannot be changed after a cluster is created.
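
_For illustration only, here is a minimal sketch of moving a cluster to a hypothetical alternate subnet (192.168.58.0/24). It requires a full delete and re-create:_

```
# Rebuild the cluster on an alternate subnet (hypothetical example,
# only needed if 192.168.49.0/24 conflicts with your local network).
minikube delete
minikube start --no-vtx-check --driver=docker --subnet=192.168.58.0/24

# The cluster IP changes with the subnet; re-check it afterwards.
minikube ip   # e.g., 192.168.58.2
```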

NOTE: It may take time for the docker service to start. If you receive an error, wait a minute and try again.

8. Test that the minikube environment is running:

Check minikube status:

$ minikube status

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

splunk@k8host01:~$

Check configured minikube nodes

$ kubectl get nodes

NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   2m13s   v1.26.1

splunk@k8host01:~$

9. Install a new cert manager in minikube:

$ kubectl apply -f \
 https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml

namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

splunk@k8host01:~$

## Turn on API Audit log collection


### LEARNING MOMENT

Let's talk about the Kubernetes audit log. The audit log gives us an audit trail of who has accessed the k8s cluster and at what time. By default, minikube does not turn on auditing of user and Kubernetes cluster activity within the API server. Many security use cases specifically use the audit log to detect malicious actors.

The scope of this workshop: we will create a basic Audit Policy that captures everything and sends all audit info to STDOUT. A capture-everything policy can generate a massive volume of events in many production environments. These procedures are taken right from the minikube documentation. A MEDIUM post by developer-guy describes a more selective policy (see step 12 below).

10. Stop your minikube environment:

$ minikube stop

✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 node stopped.

splunk@k8host01:~$

11. Create a directory where your Audit Policy will live, inside your ~/.minikube directory:

Creating the directory in this location is a workaround, as indicated in the minikube documentation:

There is currently no dedicated directory to store the audit-policy.yaml file in ~/.minikube, so we’re using the etc/ssl/certs directory as a workaround substitute. (Anything placed under ~/.minikube/files is copied into the minikube node’s filesystem at start time.)

$ mkdir -p ~/.minikube/files/etc/ssl/certs

12. Create a VERY basic audit-policy.yaml file:

NOTE: This configuration is VERY verbose. In reality, IT Ops and Security teams will be more selective about the audit events they log and thus create more specific policies.

_A more selective policy was created by developer-guy in a MEDIUM post; an illustrative sketch follows the basic policy below._

$ cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF
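
_For reference, a more selective policy might look something like the following sketch. This is an illustrative example (not the exact policy from the MEDIUM post): it skips the noisy RequestReceived stage, keeps secrets at Metadata level to avoid logging sensitive payloads, records full detail for pod creates/deletes, and drops everything else._

```
apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the RequestReceived stage to roughly halve event volume.
omitStages:
  - "RequestReceived"
rules:
  # Never log request/response bodies for secrets (avoids leaking data).
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Capture full detail when pods are created or deleted.
  - level: RequestResponse
    verbs: ["create", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
  # Drop everything else.
  - level: None
```
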
13. Restart your Kubernetes / minikube environment with the Audit Policy configurations:

The “-” value for the audit-log path tells the API server to send audit output to STDOUT.

$ minikube start --no-vtx-check --driver=docker --subnet=192.168.49.0/24\
 --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml\
 --extra-config=apiserver.audit-log-path=-;\
 eval $(minikube -p minikube docker-env)


😄  minikube v1.29.0 on Ubuntu 22.04 (xen/amd64)
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
    ▪ apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml
    ▪ apiserver.audit-log-path=-
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

splunk@k8host01:~$

14. Verify minikube is logging API server audit information:

$ kubectl logs kube-apiserver-minikube -n kube-system \
 | grep audit.k8s.io/v1 \
 | head -2

{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cdce1638-bfa1-48a1-b1bf-a3f42daf0379","stage":"RequestReceived","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.49.1"],"userAgent":"Go-http-client/1.1","requestReceivedTimestamp":"2023-03-25T21:06:36.185223Z","stageTimestamp":"2023-03-25T21:06:36.185223Z"}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cdce1638-bfa1-48a1-b1bf-a3f42daf0379","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.49.1"],"userAgent":"Go-http-client/1.1","responseStatus":{"metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403},"requestReceivedTimestamp":"2023-03-25T21:06:36.185223Z","stageTimestamp":"2023-03-25T21:06:36.186041Z","annotations":{"authorization.k8s.io/decision":"forbid","authorization.k8s.io/reason":""}}

splunk@k8host01:~$
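
_If you want to slice the audit stream further, here is an optional sketch; it assumes the jq utility is installed, which may not be the case on your workshop host:_

```
# Show who performed "get" operations and on which URIs,
# using the first five matching audit events as a sample.
kubectl logs kube-apiserver-minikube -n kube-system \
 | grep audit.k8s.io/v1 \
 | jq -r 'select(.verb == "get") | [.user.username, .requestURI] | @tsv' \
 | head -5
```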

## Create a server name resolution for minikube

_Leveraging the minikube IP address will be critical later in this workshop. To make the IP easier to reference, let’s create a server name resolution for it._



15. Check the IP address of your minikube environment:

$ minikube ip

192.168.49.2

16. Add a new host entry into /etc/hosts to allow for name resolution of your minikube cluster in future tasks.

NOTE: running the sudo command will require you to enter your password.

$ echo -e "192.168.49.2\tminikube" | sudo tee --append /etc/hosts

17. Test your new name resolution:

$ nslookup minikube

Server:   127.0.0.53
Address:  127.0.0.53#53

Name:     minikube
Address:  192.168.49.2

3. FW#1 - Download and test the spring boot java-based application (PetClinic)

    _Approximately 10 minutes_



## Download PetClinic application



18. Create a “k8s_workshop” directory for all downloaded and configured packages for this workshop;

$ mkdir ~/k8s_workshop

19. Create a “petclinic” directory where you will save your downloaded PetClinic app;

$ mkdir ~/k8s_workshop/petclinic

20. Change into the “petclinic” directory:

$ cd ~/k8s_workshop/petclinic

21. Download the PetClinic source;

$ git clone --branch springboot3 \
 https://github.com/spring-projects/spring-petclinic.git

Cloning into 'spring-petclinic'...
remote: Enumerating objects: 9554, done.
remote: Total 9554 (delta 0), reused 0 (delta 0), pack-reused 9554
Receiving objects: 100% (9554/9554), 7.72 MiB | 20.76 MiB/s, done.
Resolving deltas: 100% (3620/3620), done.

splunk@k8host01:~/k8s_workshop/petclinic$

## Build the PetClinic application



22. Change into the PetClinic application source directory;

$ cd spring-petclinic

23. Use MAVEN to build a new package into a “target” directory;

$ ./mvnw package

. . .
Downloading from spring-milestones: https://repo.spring.io/milestone/org/iq80/snappy/snappy/0.4/snappy-0.4.jar
Downloading from spring-milestones: https://repo.spring.io/milestone/org/tukaani/xz/1.9/xz-1.9.jar
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/shared/file-management/3.1.0/file-management-3.1.0.jar
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-archiver/3.6.0/maven-archiver-3.6.0.jar
Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-io/3.4.0/plexus-io-3.4.0.jar
Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-archiver/4.4.0/plexus-archiver-4.4.0.jar
Downloading from central: https://repo.maven.apache.org/maven2/org/iq80/snappy/snappy/0.4/snappy-0.4.jar
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/shared/file-management/3.1.0/file-management-3.1.0.jar (36 kB at 4.5 MB/s)
Downloading from central: https://repo.maven.apache.org/maven2/org/tukaani/xz/1.9/xz-1.9.jar
Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-archiver/3.6.0/maven-archiver-3.6.0.jar (26 kB at 2.0 MB/s)
Downloaded from central: https://repo.maven.apache.org/maven2/org/tukaani/xz/1.9/xz-1.9.jar (116 kB at 9.7 MB/s)
Downloaded from central: https://repo.maven.apache.org/maven2/org/iq80/snappy/snappy/0.4/snappy-0.4.jar (58 kB at 2.6 MB/s)
Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-archiver/4.4.0/plexus-archiver-4.4.0.jar (211 kB at 8.4 MB/s)
Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-io/3.4.0/plexus-io-3.4.0.jar (79 kB at 3.1 MB/s)
[INFO] Building jar: /home/ubuntu/k8s_workshop/petclinic/spring-petclinic/target/spring-petclinic-3.0.0-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:3.0.4:repackage (repackage) @ spring-petclinic ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:32 min
[INFO] Finished at: 2023-03-18T00:30:31Z
[INFO] ------------------------------------------------------------------------

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic$

## Test run your newly built PetClinic application



24. Change into the PetClinic application target build directory;

$ cd ~/k8s_workshop/petclinic/spring-petclinic/target

25. Test that your PetClinic app was built correctly by running it via the command line;

$ java -jar ./spring-petclinic-3.0.0-SNAPSHOT.jar

(PetClinic ASCII-art banner)

:: Built with Spring Boot :: 3.0.4

2023-03-18T00:34:56.526Z INFO 41924 --- [ main] o.s.s.petclinic.PetClinicApplication : Starting PetClinicApplication v3.0.0-SNAPSHOT using Java 17.0.6 with PID 41924 (/home/ubuntu/k8s_workshop/petclinic/spring-petclinic/target/spring-petclinic-3.0.0-SNAPSHOT.jar started by ubuntu in /home/ubuntu/k8s_workshop/petclinic/spring-petclinic/target)
2023-03-18T00:34:56.534Z INFO 41924 --- [ main] o.s.s.petclinic.PetClinicApplication : No active profile set, falling back to 1 default profile: "default"
2023-03-18T00:34:58.987Z INFO 41924 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2023-03-18T00:34:59.092Z INFO 41924 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 92 ms. Found 2 JPA repository interfaces.
2023-03-18T00:35:00.245Z INFO 41924 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2023-03-18T00:35:00.274Z INFO 41924 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2023-03-18T00:35:00.275Z INFO 41924 --- [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.5]
2023-03-18T00:35:00.430Z INFO 41924 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2023-03-18T00:35:00.433Z INFO 41924 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3742 ms
2023-03-18T00:35:00.855Z INFO 41924 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-03-18T00:35:01.283Z INFO 41924 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:6d55a15b-ae39-47d0-8d8f-6475126fd706 user=SA
2023-03-18T00:35:01.286Z INFO 41924 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2023-03-18T00:35:01.509Z INFO 41924 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2023-03-18T00:35:01.585Z INFO 41924 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 6.1.7.Final
2023-03-18T00:35:02.106Z INFO 41924 --- [ main] SQL dialect : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2023-03-18T00:35:03.718Z INFO 41924 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2023-03-18T00:35:03.731Z INFO 41924 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2023-03-18T00:35:05.975Z INFO 41924 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 13 endpoint(s) beneath base path '/actuator'
2023-03-18T00:35:06.137Z INFO 41924 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2023-03-18T00:35:06.159Z INFO 41924 --- [ main] o.s.s.petclinic.PetClinicApplication : Started PetClinicApplication in 10.421 seconds (process running for 11.275)

26. Use a web browser and enter your Ubuntu Linux host’s external IP address on port 8080:

_http://<UBUNTU_EXTERNAL_IP>:8080/_

_Feel free to navigate through the interface to find owners, add owners, add pets and search for Veterinarians. You can also simulate an error by clicking on ERROR in the menu bar;_





![alt_text](images/image5.png "image_tooltip")




27. Hit <CTRL>-C to break out from running the PetClinic application;

    

4. FW#1 - Create a docker image of the PetClinic application

    _Approximately 10 minutes_



## Ensure you can see the minikube docker-daemon repo:



28. Take a look at the available docker images;

$ docker images

REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
registry.k8s.io/kube-apiserver            v1.26.1   deb04688c4a3   8 weeks ago     134MB
registry.k8s.io/kube-scheduler            v1.26.1   655493523f60   8 weeks ago     56.3MB
registry.k8s.io/kube-controller-manager   v1.26.1   e9c08e11b07f   8 weeks ago     124MB
registry.k8s.io/kube-proxy                v1.26.1   46a6bb3c77ce   8 weeks ago     65.6MB
registry.k8s.io/etcd                      3.5.6-0   fce326961ae2   3 months ago    299MB
registry.k8s.io/pause                     3.9       e6f181688397   5 months ago    744kB
registry.k8s.io/coredns/coredns           v1.9.3    5185b96f0bec   9 months ago    48.8MB
registry.k8s.io/pause                     3.6       6270bb605e12   18 months ago   683kB
gcr.io/k8s-minikube/storage-provisioner   v5        6e38f40d628d   23 months ago   31.5MB

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic/target$

_NOTE: If you do not see a repository list similar to the one above, make sure to run the following command:_

eval $(minikube -p minikube docker-env)

Then run “docker images” again;
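
_For context, `minikube docker-env` prints shell export statements that repoint your local docker CLI at the docker daemon running inside minikube, and the eval applies them to your current shell. Its output looks roughly like this (IPs and paths will vary):_

```
$ minikube -p minikube docker-env

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/home/splunk/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
```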

## Build and run a docker image of the PetClinic app:

29. Ensure we are in the PetClinic application target build directory;

$ cd ~/k8s_workshop/petclinic/spring-petclinic/target

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic/target$

30. Create a docker file in the target directory:

_We’ll take full advantage of how containers work by basing our docker image on a pre-built image from the docker repo that already has JDK 17 installed. This means we do not have to include a Java runtime in our docker image._

If you are copying text from a PDF, ensure you validate that all whitespace characters are included in your paste;

If not, you will need to add them manually;

$ vi Dockerfile

-or-

$ ne Dockerfile

# syntax=docker/dockerfile:1

FROM eclipse-temurin:17-jdk-jammy

WORKDIR /app

COPY * ./

CMD ["java", "-jar", "spring-petclinic-3.0.0-SNAPSHOT.jar"]
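
_Side note: `COPY * ./` copies everything in the target directory into the image. A slightly leaner variant, untested in this workshop, would copy only the jar:_

```
# syntax=docker/dockerfile:1
FROM eclipse-temurin:17-jdk-jammy
WORKDIR /app
# Copy only the built jar instead of the whole target directory.
COPY spring-petclinic-3.0.0-SNAPSHOT.jar ./
CMD ["java", "-jar", "spring-petclinic-3.0.0-SNAPSHOT.jar"]
```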

31. Build a docker image of the PetClinic app into the minikube docker image repo:

**_NOTE: There is a period at the end of the command that is required!_**

$ docker build --tag <YOUR_USERNAME>/petclinic-otel:v1 .

Sending build context to Docker daemon  59.25MB
Step 1/4 : FROM eclipse-temurin:17-jdk-jammy
17-jdk-jammy: Pulling from library/eclipse-temurin
74ac377868f8: Pull complete
a182a611d05b: Pull complete
ad4fe29a3001: Pull complete
9d52462c5181: Pull complete
Digest: sha256:c79cbdf7f1eaff691cf9c4445eb7c4111d1034945edd8ccb02a9b2e8aa086469
Status: Downloaded newer image for eclipse-temurin:17-jdk-jammy
 ---> 700139b9ad2f
Step 2/4 : WORKDIR /app
 ---> Running in 125d1dad18eb
Removing intermediate container 125d1dad18eb
 ---> f0bb81356cc3
Step 3/4 : COPY * ./
 ---> 4bff46c93751
Step 4/4 : CMD ["java", "-jar", "spring-petclinic-3.0.0-SNAPSHOT.jar"]
 ---> Running in 413dcfa64856
Removing intermediate container 413dcfa64856
 ---> 1364b526ea2e
Successfully built 1364b526ea2e
Successfully tagged <YOUR_USERNAME>/petclinic-otel:v1

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic/target$

32. Confirm the existence of your new docker image in the repo:

$ docker images

REPOSITORY                                TAG            IMAGE ID       CREATED         SIZE
<YOUR_USERNAME>/petclinic-otel                         v1             1364b526ea2e   2 minutes ago   514MB
eclipse-temurin                           17-jdk-jammy   700139b9ad2f   2 days ago      455MB
registry.k8s.io/kube-apiserver            v1.26.1        deb04688c4a3   8 weeks ago     134MB
registry.k8s.io/kube-controller-manager   v1.26.1        e9c08e11b07f   8 weeks ago     124MB
registry.k8s.io/kube-scheduler            v1.26.1        655493523f60   8 weeks ago     56.3MB
registry.k8s.io/kube-proxy                v1.26.1        46a6bb3c77ce   8 weeks ago     65.6MB
registry.k8s.io/etcd                      3.5.6-0        fce326961ae2   3 months ago    299MB
registry.k8s.io/pause                     3.9            e6f181688397   5 months ago    744kB
registry.k8s.io/coredns/coredns           v1.9.3         5185b96f0bec   9 months ago    48.8MB
registry.k8s.io/pause                     3.6            6270bb605e12   18 months ago   683kB
gcr.io/k8s-minikube/storage-provisioner   v5             6e38f40d628d   23 months ago   31.5MB

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic/target$

33. Test your docker image to see if it will run correctly:

$ docker run -p 8080:8080 <YOUR_USERNAME>/petclinic-otel:v1

(PetClinic ASCII-art banner)

:: Built with Spring Boot :: 3.0.4

2023-03-18T20:12:38.940Z INFO 1 --- [ main] o.s.s.petclinic.PetClinicApplication : Starting PetClinicApplication v3.0.0-SNAPSHOT using Java 17.0.6 with PID 1 (/app/spring-petclinic-3.0.0-SNAPSHOT.jar started by root in /app)
2023-03-18T20:12:38.955Z INFO 1 --- [ main] o.s.s.petclinic.PetClinicApplication : No active profile set, falling back to 1 default profile: "default"
2023-03-18T20:12:41.019Z INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2023-03-18T20:12:41.094Z INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 63 ms. Found 2 JPA repository interfaces.
2023-03-18T20:12:42.216Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2023-03-18T20:12:42.231Z INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2023-03-18T20:12:42.232Z INFO 1 --- [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.5]
2023-03-18T20:12:42.347Z INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2023-03-18T20:12:42.351Z INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3276 ms
2023-03-18T20:12:42.664Z INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2023-03-18T20:12:43.098Z INFO 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:d4032502-1294-43a7-85fd-b95a017e5b90 user=SA
2023-03-18T20:12:43.102Z INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2023-03-18T20:12:43.365Z INFO 1 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2023-03-18T20:12:43.440Z INFO 1 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 6.1.7.Final
2023-03-18T20:12:43.906Z INFO 1 --- [ main] SQL dialect : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2023-03-18T20:12:45.522Z INFO 1 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2023-03-18T20:12:45.546Z INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2023-03-18T20:12:47.657Z INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 13 endpoint(s) beneath base path '/actuator'
2023-03-18T20:12:47.850Z INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2023-03-18T20:12:47.871Z INFO 1 --- [ main] o.s.s.petclinic.PetClinicApplication : Started PetClinicApplication in 9.857 seconds (process running for 10.659)

34. Hit <CTRL>-C to break out from running the docker PetClinic application;

 


## Test connectivity to your docker image



35. Get the IP address of your minikube environment:

_Since we created our image in the minikube docker repo, when we run it, it executes from within the minikube environment. Therefore, to connect to the app over port 8080, we need the minikube IP address._

_We already wrote down the IP address of our minikube / k8s environment in a previous section. If not, here’s the command again:_

$ minikube ip

192.168.49.2

36. Log into a new terminal or SSH window:

$ ssh splunk@<UBUNTU_EXTERNAL_IP>

37. Use a curl command in your new terminal shell to verify that your application is working:

Instead of an IP address, you can use the “minikube” name resolution that we configured previously.

$ curl 192.168.49.2:8080

-or-

$ curl minikube:8080

.
.
.
  </body>

      <br />
      <br />
      <div class="container">
        <div class="row">
          <div class="col-12 text-center">
            <img src="/resources/images/spring-pivotal-logo.png"
              alt="Sponsored by Pivotal" /></div>
        </div>
      </div>
    </div>
  </div>

  <script src="/webjars/bootstrap/5.2.3/dist/js/bootstrap.bundle.min.js"></script>

</body>

</html>

splunk@k8host01:~/k8s_workshop/petclinic/spring-petclinic/target$

38. Hit <CTRL>-C in your old terminal shell to break out from running the PetClinic application;

5. FW#1 - Deploy your docker image into your local Kubernetes environment

    _Approximately 20 minutes_

## Create a Kubernetes manifest file for k8s deployment

39. Create a “k8s_deploy” directory. This is where you’ll create your k8s manifest file for the deployment of our PetClinic application.

$ mkdir ~/k8s_workshop/petclinic/k8s_deploy

40. Change into the “petclinic/k8s_deploy” directory:

$ cd ~/k8s_workshop/petclinic/k8s_deploy
41. Create a Kubernetes manifest yaml file:

A manifest file allows us to define the configuration of our Kubernetes deployments. Kubernetes, as an orchestration environment, offers many services, such as load balancing and performance scaling of our apps, which can be defined and configured in our manifest file.

_In our workshop, we will create a simple use case to deploy our pre-defined docker image and run it as a single self-contained container on a specific port in our Kubernetes environment. A quick way to sanity-check the finished manifest is shown after the file contents below._

If you are copying text from a PDF, ensure you validate that all whitespace characters are included in your paste;

If not, you will need to add them manually;

Manifest files are VERY dependent on proper whitespace indentation.

$ vi <YOUR_USERNAME>-petclinic-k8s-manifest.yml

-or-

$ ne <YOUR_USERNAME>-petclinic-k8s-manifest.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <YOUR_USERNAME>-petclinic-otel-deployment
  labels:
    app: <YOUR_USERNAME>-petclinic-otel-app
spec:
  selector:
    matchLabels:
      app: <YOUR_USERNAME>-petclinic-otel-app
  template:
    metadata:
      labels:
        app: <YOUR_USERNAME>-petclinic-otel-app
    spec:
      containers:
      - name: <YOUR_USERNAME>-petclinic-otel-container01
        image: <YOUR_USERNAME>/petclinic-otel:v1
        ports:
        - containerPort: 8080
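
_Optional sanity check: before applying the manifest in the next section, you can ask kubectl for a client-side dry run, which validates the YAML and its schema without creating anything. The output should look roughly like this:_

```
$ kubectl apply --dry-run=client -f <YOUR_USERNAME>-petclinic-k8s-manifest.yml

deployment.apps/<YOUR_USERNAME>-petclinic-otel-deployment created (dry run)
```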

## Deploy your PetClinic app in Kubernetes

42. Deploy your app into Kubernetes / minikube in the “default” namespace using your manifest file:

$ kubectl apply -f <YOUR_USERNAME>-petclinic-k8s-manifest.yml

deployment.apps/<YOUR_USERNAME>-petclinic-otel-deployment created

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

43. View information on your deployed app:

**_NOTE: The READY column helps identify that the application is running successfully. If it was not running it would say 0/1._**

$ kubectl get deployments

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
<YOUR_USERNAME>-petclinic-otel-deployment   1/1     1            1           2m21s

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

_We can see that in our single node, we have one pod running._

_The “get pods” command also displays a STATUS column that indicates that the container is running without errors._

$ kubectl get pods -o wide

NAME                                                         READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
<YOUR_USERNAME>-petclinic-otel-deployment-7fbc66b9b8-79ndh   1/1     Running   0          4m43s   10.244.0.3   minikube   <none>           <none>

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

### LEARNING MOMENT

Let's talk about Kubernetes and namespaces. A namespace provides scope for Kubernetes resource names and is a method that Kubernetes offers to help isolate resources logically - especially when there are multiple teams or contributors to a microservice application. Ideally, in production, Kubernetes Administrators would likely create one namespace for our PetClinic application and a different namespace for our otel collector agent. Review the kubernetes.io documentation for more information about Kubernetes namespaces, when to use them, and how to work with them. A blog post from Google also describes using multiple Kubernetes layers - including namespaces - for better security isolation practices.

The scope of this workshop: to simplify our exercises, we will create all our deployed resources in the "default" namespace.

## Validate the PetClinic App is running in your container



44. View all pods running in your Kubernetes cluster:

$ kubectl get pods -o wide

NAME                                            READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
<YOUR_USERNAME>-petclinic-otel-deployment-7fbc66b9b8-79ndh   1/1     Running   0          4m43s   10.244.0.3   minikube   <none>           <none>

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

45. Log into your PetClinic container pod:

You can use the pod name retrieved in the previous command, or use a Linux shell technique (command substitution) to run a sub-command inline.

The sub-command finds the pod name automatically:

$ kubectl exec -ti $(kubectl get pods -o go-template --template\
 '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'\
 | grep petclinic-otel-deployment) -- bash

root@<YOUR_USERNAME>-petclinic-otel-deployment-56779665c6-xqqk5:/app#
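
_Alternatively, newer kubectl releases let you skip the sub-command entirely and exec into the deployment by name, letting kubectl pick one of its pods for you (a convenience sketch, assuming your kubectl supports the resource/name form for exec):_

```
$ kubectl exec -ti deployment/<YOUR_USERNAME>-petclinic-otel-deployment -- bash
```
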
46. Check that your java app is running:

You are now logged into the container - perform a ps -ef command to view the java app running:

$ ps -ef | grep java

root           1       0  6 19:08 ?        00:04:12 java -jar spring-petclinic-3.0.0-SNAPSHOT.jar
root         165     155  0 20:12 pts/0    00:00:00 grep --color=auto java

47. Exit from the container:

$ exit

root@<YOUR_USERNAME>-petclinic-otel-deployment-56779665c6-xqqk5:/app# exit
exit

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

## Test connectivity using a local shell to your PetClinic app running in Kubernetes



48. Create a service in Kubernetes to expose a port for external access:

_Services running on containers in Kubernetes must be exposed to allow for external access._

$ kubectl expose deployment/<YOUR_USERNAME>-petclinic-otel-deployment \
 --type="NodePort" --port 8080 --name <YOUR_USERNAME>-petclinic-srv

service/<YOUR_USERNAME>-petclinic-srv exposed

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

49. View all services running in your Kubernetes cluster:

$ kubectl get services

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
<YOUR_USERNAME>-petclinic-srv   NodePort    10.109.49.79   <none>        8080:30751/TCP   5s
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP          29h

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$
50. Test Connectivity to the PetClinic app locally:

Use a kubectl command to set an environment variable for our exposed port.

$ export NODE_PORT=$(kubectl get services/<YOUR_USERNAME>-petclinic-srv\
 -o go-template='{{(index .spec.ports 0).nodePort}}');\
 echo $NODE_PORT
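
_If the go-template syntax feels awkward, an equivalent jsonpath form (standard kubectl, shown as an alternative sketch) is:_

```
$ export NODE_PORT=$(kubectl get services/<YOUR_USERNAME>-petclinic-srv \
 -o jsonpath='{.spec.ports[0].nodePort}'); echo $NODE_PORT
```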

Use curl to connect to our PetClinic app at the minikube IP address. (You can also manually type in the minikube IP address instead of using $(minikube ip), or use the “minikube” name resolution.)

$ curl $(minikube ip):$NODE_PORT

-or-

$ curl minikube:$NODE_PORT

-or-

$ curl 192.168.49.2:$NODE_PORT

.
.
.
          <div class="col-12 text-center">
            <img src="/resources/images/spring-pivotal-logo.png"
              alt="Sponsored by Pivotal" /></div>
        </div>
      </div>
    </div>
  </div>

  <script src="/webjars/bootstrap/5.2.3/dist/js/bootstrap.bundle.min.js"></script>

</body>

</html>

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

## Define a standard external exposed port

51. Delete our existing PetClinic service:

$ kubectl delete service <YOUR_USERNAME>-petclinic-srv

service "<YOUR_USERNAME>-petclinic-srv" deleted

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

52. Verify the service was deleted:

$ kubectl get services

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP          29h

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

53. Update your manifest yaml file to add an exposed service to our deployment using a standard port of 30000. (The new Service definition is the block at the top of the file below.)

The problem with running “kubectl expose” is that we are not able to predefine the exposed port we want our application to use. To do this, we must update our manifest file to add a service with a predefined port.

_NOTE: The three dashes (---) separating the Service definition from the Deployment MUST also be copied with the text block._

If you are copying text from a PDF, ensure you validate that all whitespace characters are included in your paste;

If not, you will need to add them manually;

Manifest files are VERY dependent on proper whitespace indentation.

$ vi <YOUR_USERNAME>-petclinic-k8s-manifest.yml

-or-

$ ne <YOUR_USERNAME>-petclinic-k8s-manifest.yml
apiVersion: v1
kind: Service
metadata:
  name: <YOUR_USERNAME>-petclinic-srv
spec:
  selector:
    app: <YOUR_USERNAME>-petclinic-otel-app
  ports:
  - protocol: TCP 
    port: 8080
    nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <YOUR_USERNAME>-petclinic-otel-deployment
  labels:
    app: <YOUR_USERNAME>-petclinic-otel-app
spec:
  selector:
    matchLabels:
      app: <YOUR_USERNAME>-petclinic-otel-app
  template:
    metadata:
      labels:
        app: <YOUR_USERNAME>-petclinic-otel-app
    spec:
      containers:
      - name: <YOUR_USERNAME>-petclinic-otel-container01
        image: <YOUR_USERNAME>/petclinic-otel:v1
        ports:
        - containerPort: 8080
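
NOTE: Kubernetes only accepts nodePort values inside the cluster’s configured NodePort range, which defaults to 30000-32767; that is why 30000 is a safe, predictable choice here.
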
54. Re-install our Kubernetes deployment:

$ kubectl apply -f <YOUR_USERNAME>-petclinic-k8s-manifest.yml

service/<YOUR_USERNAME>-petclinic-srv created
deployment.apps/<YOUR_USERNAME>-petclinic-otel-deployment unchanged

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

55. View all services running in your Kubernetes cluster:

$ kubectl get services

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
<YOUR_USERNAME>-petclinic-srv   NodePort    10.100.66.34   <none>        8080:30000/TCP   83s
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP          29h

splunk@k8host01:~/k8s_workshop/petclinic/k8s_deploy$

## Test connectivity using an external web browser to your PetClinic app running in Kubernetes

Since our docker application is running from within the minikube environment, we’ll have to create a tunnel to our PetClinic service so that connections from outside our Linux environment can be established.

We can use kubectl to perform this task!

NOTE: You will need to SSH to your Ubuntu Linux server using a new terminal / shell to run this command.

56. Log into a new terminal or SSH window:

$ ssh splunk@<UBUNTU_EXTERNAL_IP>

57. Create a tunnel to your PetClinic Pod using `kubectl` in a new terminal:

Start your port forwarding using kubectl: _(Note: Your command prompt will not come back)_

$ kubectl port-forward --address 0.0.0.0 service/<YOUR_USERNAME>-petclinic-srv 8080:8080

Forwarding from 0.0.0.0:8080 -> 8080
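
_If you would rather not dedicate a terminal to the tunnel, an optional pattern is to background the port-forward and kill it when finished (a sketch using standard shell job control):_

```
# Run the port-forward in the background and record its PID.
kubectl port-forward --address 0.0.0.0 service/<YOUR_USERNAME>-petclinic-srv 8080:8080 &
PF_PID=$!

# ...browse to the app while the tunnel is up...

# Tear the tunnel down when finished.
kill $PF_PID
```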

58. Now use your external IP address with port 8080 in your local browser to connect to the PetClinic app.

http://<UBUNTU_EXTERNAL_IP>:8080





![alt_text](images/image6.png "image_tooltip")






6. FW#1 - Workshop Conclusion


End of the Foundation Workshop #1
(Workshop 1 of 4 in the Kubernetes | Otel | Splunk series)


We hope this has provided significant value and if you have any questions on what you've learned, please reach out to your local Splunk SE.


Please proceed to Foundational Workshop #2 
(Part 2 of the workshop series)


Thank you!

7. APPENDIX: Supplementary Material / Exercises

APPENDIX I: Workshop Technical Requirements

Kubernetes / Minikube Server Host Requirements

Linux Server

  • AWS EC2; or
  • VirtualBox VM; or
  • VMware Fusion VM; or
  • physical server

OS:

(tested)

  • Ubuntu 22.04.2 LTS; or
  • Ubuntu 18.04.6 LTS

Compute:

  • AWS EC2 - t2.large; or
  • VM - 2 x CPU

Memory:

  • 8 GB minimum (or 4 GB if you are not installing Splunk Enterprise locally)

Storage:

  • 25 GB minimum

Networking:

  • 1 Gbps NIC minimum

Firewall / Security Group:

  • Inbound Ports: 22, 8080, 443, 8000, 8088
  • Outbound Ports: all traffic

Splunk Platform Requirements

Splunk Enterprise Version:

  • 9.0.x minimum

Minimum Capabilities:

  • Role to create indexes, HEC Tokens & search;
-or-

Splunk Cloud Version:

  • 9.0.x minimum

Minimum Capabilities:

  • Role to create indexes, HEC Tokens & search;

Splunk Observability Cloud (optional)

Splunk Observability Cloud

  • Trial Version

Minimum Capabilities:

  • Ability to create Ingest Access Tokens