Oct 17, 2025 • 4 min read
Cloud migration with Spring Boot: GCP Pub/Sub to Azure Service Bus
Jovica Zorić
Chief Technology Officer
Cloud migration isn’t just a technical project; it’s a business-critical decision driven by changing needs such as cost optimization, regulatory compliance, M&A activity, or maintaining negotiating leverage with cloud vendors. Once in the cloud, however, applications built on managed services create natural lock-in, requiring teams to navigate the unique APIs, operating models, and performance trade-offs of the chosen platform.
In this article, we’ll walk through a migration approach for a Spring Boot-based messaging service from Google Cloud Platform (GCP) to Microsoft Azure, all while keeping business logic intact and minimizing disruption for developers.
Overview
Our reference architecture follows a typical event-driven microservices pattern:
- Publisher service: Receives incoming HTTP requests and publishes events to a message broker
- Subscriber service: Consumes messages from the broker and processes them
On GCP, these applications run inside Google Kubernetes Engine (GKE) and rely on Pub/Sub for messaging. The target on Azure is Azure Kubernetes Service (AKS), with Service Bus providing the messaging layer. Because Kubernetes gives us a portable runtime abstraction, we focus our migration effort on the messaging layer rather than the entire application stack.
We also can’t overlook the local development setup. To test locally, we use Docker to simulate the cloud services, running the Pub/Sub Emulator for GCP and ActiveMQ to replicate Azure’s JMS integration. Here is an overview diagram for reference.
Migration strategy
Keeping the application’s core logic unchanged, our cloud-native migration strategy focuses on three important layers:
- Infrastructure: Moving workloads from GKE to AKS and migrating messaging from Pub/Sub to Service Bus.
- Application Configuration: Ensuring that both Publisher and Subscriber can connect to the correct messaging backend while avoiding rewrites of business logic.
- Developer Workflow: Keeping local testing consistent with cloud deployments to minimize friction and maintain developer productivity.
To support these layers, we leverage three complementary abstractions:
- Kubernetes provides a consistent runtime across both GCP and Azure.
- Spring Profiles enable configuration switching without modifying core logic.
- Terraform ensures infrastructure reproducibility.
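To make the Spring Profiles idea concrete, each target environment gets its own configuration file selected by the active profile. The file names follow Spring Boot’s standard application-&lt;profile&gt; convention; the specific property values below are illustrative assumptions, not taken from the project:

```yaml
# application-gcp.yml — hypothetical settings for the gcp profile
spring:
  cloud:
    gcp:
      project-id: my-gcp-project   # assumption: placeholder project id

# application-azure.yml — hypothetical settings for the azure profile
# (spring-cloud-azure-starter-servicebus-jms properties)
spring:
  jms:
    servicebus:
      connection-string: ${SERVICEBUS_CONNECTION_STRING}  # assumption: injected secret
      pricing-tier: standard
```

Activating `gcp` or `azure` via SPRING_PROFILES_ACTIVE then pulls in the matching file on top of the shared application.yml, with no change to the application code.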
Implementation details
Infrastructure as Code
It’s been said many times, but it’s always worth repeating: reliable migrations can’t be built on manual infrastructure setup, often called ClickOps. Every mouse click is an undocumented decision, a potential inconsistency, and a future failure point. Infrastructure as Code (IaC) replaces this fragility with order, where:
- Changes are documented and auditable through version control;
- Teams can collaborate without conflicting manual setups;
- Infrastructure can be reused across environments;
- Deployments are automated and repeatable;
- Scaling and evolving the infrastructure becomes much easier.
GCP infrastructure
We begin on GCP with a minimal Terraform example that provisions a GKE cluster and Pub/Sub resources.
resource "google_container_cluster" "primary" {
  name                     = "${var.project}-gke"
  location                 = var.region
  deletion_protection      = false
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name
}

resource "google_container_node_pool" "primary_nodes" {
  name     = google_container_cluster.primary.name
  location = var.region
  cluster  = google_container_cluster.primary.name

  node_count = var.gke_num_nodes

  autoscaling {
    min_node_count = var.general_purpose_min_node_count
    max_node_count = var.general_purpose_max_node_count
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/pubsub"
    ]

    labels = {
      env = var.project
    }

    machine_type = var.general_purpose_machine_type
    tags         = ["gke-node", "${var.project}-gke"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

resource "google_pubsub_topic" "articles" {
  name = "articles"

  labels = {
    env = var.project
  }
}

resource "google_pubsub_subscription" "articles_events" {
  name  = "articles-events"
  topic = google_pubsub_topic.articles.id

  labels = {
    env = var.project
  }
}
Azure infrastructure
The Azure equivalent maintains structural parity while accounting for platform differences:
resource "azurerm_servicebus_namespace" "main" {
  location            = azurerm_resource_group.rg.location
  name                = var.servicebus_namespace_name
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = var.servicebus_sku

  tags = var.tags
}

resource "azurerm_servicebus_topic" "main" {
  name         = var.servicebus_topic_name
  namespace_id = azurerm_servicebus_namespace.main.id
}

resource "azurerm_servicebus_subscription" "main" {
  name               = var.servicebus_subscription_name
  topic_id           = azurerm_servicebus_topic.main.id
  max_delivery_count = 1
}

resource "azurerm_user_assigned_identity" "aks" {
  location            = azurerm_resource_group.rg.location
  name                = "${var.cluster_name}-identity"
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_kubernetes_cluster" "main" {
  location            = azurerm_resource_group.rg.location
  name                = var.cluster_name
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = var.cluster_name

  default_node_pool {
    name       = "default"
    vm_size    = var.node_vm_size
    node_count = var.node_count

    upgrade_settings {
      drain_timeout_in_minutes      = 0
      max_surge                     = "10%"
      node_soak_duration_in_minutes = 0
    }
  }

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.aks.id]
  }

  tags = var.tags
}

resource "azurerm_role_assignment" "acr_pull" {
  principal_id                     = azurerm_kubernetes_cluster.main.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = azurerm_container_registry.main.id
  skip_service_principal_aad_check = true
}
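One thing worth noting: the role assignment above only covers image pulls from the container registry. The workloads themselves also need data-plane access to Service Bus. A sketch of what that could look like, assuming the user-assigned identity is the one the pods run under ("Azure Service Bus Data Sender" and "Azure Service Bus Data Receiver" are Azure built-in roles; the wiring to pods is our assumption):

```terraform
# Sketch: grant the AKS identity data-plane access to the Service Bus namespace.
resource "azurerm_role_assignment" "servicebus_sender" {
  principal_id         = azurerm_user_assigned_identity.aks.principal_id
  role_definition_name = "Azure Service Bus Data Sender"
  scope                = azurerm_servicebus_namespace.main.id
}

resource "azurerm_role_assignment" "servicebus_receiver" {
  principal_id         = azurerm_user_assigned_identity.aks.principal_id
  role_definition_name = "Azure Service Bus Data Receiver"
  scope                = azurerm_servicebus_namespace.main.id
}
```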
Application layer: Spring Boot
The strength of Spring’s profile system lies in its ability to manage environment-specific configurations with ease. By organizing cloud-specific settings into separate profiles, we maintain a clear separation of concerns.
Kubernetes deployment configuration
Before we move to application configuration, let’s write a simple Kubernetes Deployment manifest for our services. Among other settings, it uses an environment variable to activate the appropriate Spring profile.
Publisher service example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: publisher-deployment
spec:
  selector:
    matchLabels:
      app: publisher
  replicas: 1
  template:
    metadata:
      labels:
        app: publisher
    spec:
      containers:
        - name: publisher
          image: "<hub>/publisher:1"
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: gcp
          # ...
Core setup (platform agnostic)
Publisher
The Publisher is implemented as a Spring Boot application that exposes an HTTP endpoint and pushes events into Pub/Sub using Spring Integration.
public record Event(String id, String name) {}

@MessagingGateway
public interface IntegrationGateway {
    @Gateway(requestChannel = "articlesMessageChannel")
    void send(Object message);
}

@RestController
public class PublisherAPI {
    private static final Logger LOGGER = LoggerFactory.getLogger(PublisherAPI.class);

    final IntegrationGateway integrationGateway;

    public PublisherAPI(IntegrationGateway integrationGateway) {
        this.integrationGateway = integrationGateway;
    }

    @PostMapping("/send")
    public void send(@RequestBody Event message) {
        LOGGER.info(message.toString());
        integrationGateway.send(message);
    }
}

// ...
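As a quick, standalone illustration of the payload contract (the wrapper class and values are ours, not from the project): the Event record maps to the JSON body {"id": "...", "name": "..."} accepted by the /send endpoint, and its canonical record toString is what the publisher logs.

```java
// Standalone sketch of the Event payload used throughout the article.
public class EventDemo {
    // Same shape as the record declared in the publisher and subscriber services.
    record Event(String id, String name) {}

    public static void main(String[] args) {
        Event event = new Event("42", "cloud-migration");
        // Records get a canonical toString for free; this is the line that
        // LOGGER.info(message.toString()) produces in the publisher.
        System.out.println(event);  // Event[id=42, name=cloud-migration]
    }
}
```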
Subscriber
The Subscriber is a Spring Boot application that consumes events from Pub/Sub and handles them via defined business logic.
public record Event(String id, String name) {}
// ...
GCP-specific configuration
When running on GCP (activated via gcp or dev-gcp profile):
Publisher configuration:
@Configuration
@Profile({"dev-gcp", "gcp"})
public class PubSubPublisherConfiguration {

    @Bean
    public MessageChannel articlesMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public PubSubMessageConverter messageConverter() {
        return new JacksonPubSubMessageConverter(new ObjectMapper());
    }

    @Bean
    @ServiceActivator(inputChannel = "articlesMessageChannel")
    public PubSubMessageHandler articlesOutboundAdapter(PubSubTemplate pubSubTemplate, PublisherProperties publisherProperties) {
        return new PubSubMessageHandler(pubSubTemplate, publisherProperties.getArticlesTopic());
    }
}
Subscriber configuration
@Configuration
@Profile({"dev-gcp", "gcp"})
public class PubSubSubscriberConfiguration {
    private static final Logger LOGGER = LoggerFactory.getLogger(PubSubSubscriberConfiguration.class);

    @Bean
    public MessageChannel articlesMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    public PubSubMessageConverter messageConverter() {
        return new JacksonPubSubMessageConverter(new ObjectMapper());
    }

    @Bean
    public PubSubInboundChannelAdapter listingChannelAdapter(@Qualifier("articlesMessageChannel") MessageChannel inputChannel,
                                                             PubSubTemplate pubSubTemplate, SubscriberProperties properties) {
        var adapter = new PubSubInboundChannelAdapter(pubSubTemplate, properties.getArticlesSubscription());
        adapter.setOutputChannel(inputChannel);
        adapter.setAckMode(AckMode.MANUAL);
        adapter.setPayloadType(Event.class);
        return adapter;
    }

    @ServiceActivator(inputChannel = "articlesMessageChannel")
    public void consume(@Payload Event payload,
                        @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        LOGGER.info(payload.toString());
        message.ack();
    }
}
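The PublisherProperties and SubscriberProperties classes referenced above aren’t shown in the article; presumably they bind the topic and subscription names created by Terraform. A hypothetical binding (the property prefix and keys are our assumption) could be:

```yaml
# Hypothetical application-gcp.yml fragment, binding the names
# provisioned by the Terraform resources shown earlier.
app:
  articles-topic: articles                # google_pubsub_topic.articles
  articles-subscription: articles-events  # google_pubsub_subscription.articles_events
```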
Azure-specific configuration
When running on Azure (activated via azure or dev-azure profile):
Publisher configuration
@Configuration
@Profile({"dev-azure", "azure"})
public class JMSPublisherConfiguration {

    @Bean
    public MessageChannel articlesMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "articlesMessageChannel")
    public MessageHandler jmsMessageHandler(JmsTemplate jmsTemplate, PublisherProperties publisherProperties) {
        return message -> jmsTemplate.convertAndSend(publisherProperties.getArticlesTopic(), message.getPayload());
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory, ObjectMapper objectMapper,
                                   ObservationRegistry observationRegistry) {
        JmsTemplate jmsTemplate = new JmsTemplate();
        jmsTemplate.setConnectionFactory(connectionFactory);
        jmsTemplate.setMessageConverter(new JMSMessageConverter<>(objectMapper, Event.class));
        jmsTemplate.setMessageIdEnabled(true);
        jmsTemplate.setPubSubDomain(true);
        jmsTemplate.setObservationRegistry(observationRegistry);
        return jmsTemplate;
    }
}
Subscriber configuration
@Configuration
@Profile({"dev-azure", "azure"})
public class JMSSubscriberConfiguration {

    @Bean
    public DefaultJmsListenerContainerFactory jmsArticlesListenerContainerFactory(ConnectionFactory connectionFactory,
                                                                                  ObjectMapper objectMapper, ObservationRegistry observationRegistry) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setMessageConverter(new JMSMessageConverter<>(objectMapper, Event.class));
        factory.setSessionTransacted(true);
        factory.setPubSubDomain(true);
        factory.setObservationRegistry(observationRegistry);
        return factory;
    }
}
Developer experience and local development
A migration strategy that ignores developer experience, including the ability to test the migration locally, is doomed to fail. For our use case, engineers need to test changes locally without provisioning cloud resources, so we provide platform-specific emulators that maintain API compatibility, using Docker to spin up emulator containers depending on the active profile.
For GCP
docker run -it -p 8085:8085 google/cloud-sdk:530.0.0-emulators gcloud beta emulators pubsub start --host-port=0.0.0.0:8085
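To point the Spring applications at the emulator instead of real GCP, Spring Cloud GCP exposes an emulator-host property. A sketch of what a dev-gcp profile could set (file name follows Spring Boot’s convention; the value matches the port published above):

```yaml
# Hypothetical application-dev-gcp.yml: route Pub/Sub traffic to the local emulator
spring:
  cloud:
    gcp:
      pubsub:
        emulator-host: localhost:8085
```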
For Azure
Use ActiveMQ as a JMS provider that mimics Service Bus behavior locally.
docker run -d --name activemq -p 8161:8161 -p 61616:61616 -e 'ACTIVEMQ_OPTS=-Djetty.host=0.0.0.0' apache/activemq-classic:latest
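The dev-azure profile then only needs to point the JMS ConnectionFactory at the local broker. With Spring Boot’s ActiveMQ auto-configuration, that could be as simple as the following sketch (credentials assume ActiveMQ classic defaults; adjust to your container setup):

```yaml
# Hypothetical application-dev-azure.yml: use local ActiveMQ as the JMS broker
spring:
  activemq:
    broker-url: tcp://localhost:61616
    user: admin       # assumption: default ActiveMQ classic credentials
    password: admin
```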
In the end
Cloud migrations don’t need to be lengthy, multi-quarter projects that delay feature delivery and frustrate engineering teams. By using the right abstractions, in our case Kubernetes for runtime portability, Spring Profiles for configuration management, and Infrastructure as Code for reproducibility, you can carry out migrations efficiently and with minimal disruption.
More importantly, this isn’t just about moving from one cloud to another. It’s about designing adaptable systems that evolve alongside changing business priorities, whether driven by cost optimization, regulatory compliance, or strategic shifts. In our Spring Boot cloud migration from GCP Pub/Sub to Azure Service Bus, we showed how using Kubernetes, Spring Profiles, and Terraform simplifies the process, minimizes disruption, and ensures consistent behavior across environments. The real power of a well-executed migration lies in its ability to transform both technology and teams. It creates a resilient infrastructure while fostering a culture of ownership and collaboration, ensuring that your business can adapt and thrive no matter what the future holds. If you’re exploring industry-specific cases, check out our work in manufacturing software development.
Jovica Zorić
Chief Technology Officer
Jovica is a techie with more than ten years of experience. His job has evolved throughout the years, leading to his present position as the CTO at ProductDock. He will help you to choose the right technology for your business ideas and focus on the outcomes.