VisualVest’s multi-tenant, multi-talent chatbot.
About.
What does the client do? What industry?
Our client, VisualVest, is a digital asset manager. The company was founded in 2015 as a 100% subsidiary of Union Investment Group, combining the financial expertise and security of one of Germany’s leading asset management companies with the flexibility and speed of a FinTech. With VisualVest, individuals can invest their money in broadly diversified portfolios of ETFs and investment funds.
VisualVest also serves as a platform-as-a-service provider for the cooperative banking sector in Germany. Its mission is to make financial investments transparent and accessible, and it was the first so-called robo-advisor to offer sustainable investment fund portfolios. Sustainability and social responsibility have been drivers for VisualVest since its founding.
Product.
Can you tell us about the project and how long you’ve been working on it?
We are developing a base product, called the Conversational Platform, to meet various needs. We use a flow-based tool (Node-RED) to create and edit conversational flows, enabling non-developers to define guided dialogues with users. A React frontend client receives the defined flows via our NestJS middleware API and renders them in a polished chatbot interface. User-interaction tracking and rights-and-roles management are handled within the middleware component’s dashboard.
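To make the flow-to-client handover concrete, here is a minimal TypeScript sketch of what a guided-dialogue flow might look like once the middleware has delivered it to the client. The data shape, names, and flow content are illustrative assumptions, not VisualVest’s actual model:

```typescript
// Hypothetical shape of a guided dialogue as the middleware might
// deliver it to the React client (all names are illustrative).
interface FlowStep {
  id: string;
  botMessage: string;
  // Answer options the user can pick; each points to the next step,
  // or to null when the dialogue ends.
  options: { label: string; next: string | null }[];
}

type Flow = Record<string, FlowStep>;

const demoFlow: Flow = {
  start: {
    id: "start",
    botMessage: "Would you like to learn about sustainable investing?",
    options: [
      { label: "Yes", next: "sustainable" },
      { label: "No", next: null },
    ],
  },
  sustainable: {
    id: "sustainable",
    botMessage: "Great – sustainable portfolios screen funds by ESG criteria.",
    options: [],
  },
};

// The client advances through the dialogue by following the chosen option.
function nextStep(flow: Flow, current: string, choice: number): FlowStep | null {
  const next = flow[current].options[choice]?.next;
  return next ? flow[next] : null;
}
```

The point of a structure like this is that non-developers edit the flow in Node-RED, while the client only ever walks a finished graph of steps.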
Currently, the Conversational Platform supports two applications – or in a more user-centric sense, two chatbots:
1. The KuCo – Customer Coach (in German: Kunden Coach) – was our first project developed specifically for VisualVest. It guides users through predefined question-and-answer flows to help them understand financial topics, gain insights into how they can invest money, and assess their investment profiles. For example, the VisualVest team has created flows that help users evaluate how sustainably they live or what they can do for their retirement provision. Read more about the KuCo in our case study. The KuCo was later developed further and replaced by FinCo.
2. The FinCo – the Financial Coach – is similar to the initial KuCo version but with separate conversational flows, branding, and feature sets. FinCo was built as a B2C tool for bank customers, who have distinct needs. This version includes customized flows aligned with each bank’s own branding and narrative, as well as more advanced functionality such as real-time calculations and graphs that visualize potential savings at different monthly savings rates.
FinCo of a bank:
Initial situation.
What was the situation like at the beginning?
Since 2020 we have worked with VisualVest, initially as a small innovation team, to implement and test the chatbot prototype, the KuCo (Customer Coach), as both a closed enterprise version and an open-source version. Read more about the initial collaboration phase in our case study.
KuCo became a key tool for reaching potential customers and explaining financial topics. After the prototype had been successfully tested by VisualVest’s innovation team, the next step was transitioning the Conversational Platform into a corporate version to support new applications like FinCo. Expanding to a broader audience brought many changes, not only in business requirements but also in IT security needs.
What was the initial collaboration model?
From the beginning, we have worked with VisualVest as a full nearshoring team from ProductDock Portugal, with a local, German-speaking Product Owner, Designer, and Scrum Master on the VisualVest side. At the start of the project, a German-speaking Agile Coach from ProductDock supported the ramp-up phase and later facilitated integration into the VisualVest team.
To build the first prototype and production version of the KuCo, we acted as an independent development team within our own IT infrastructure, using GCP for the open-source version and an Azure instance for the VisualVest enterprise version. Transitioning to a more corporate setup required us to transfer our infrastructure to the VisualVest IT team.
Opportunity space.
What needs or problems did we want to solve?
There were two main challenges:
The first one is the business challenge.
The core problem is common to many companies: how do you reach your customers? Financial institutions face two obstacles here: the younger audience has to be reached online first, and they are unlikely to visit a bank consultant to discuss pension gaps and savings strategies.
The second challenge is recognizing customer potential. Since not all of the banks’ customers can be actively managed, and it is not transparent where they may hold other investments, their potential can only be guessed at.
The second is the technical challenge – which breaks down into several sub-challenges:
1. FinCo was not intended solely for VisualVest’s clients – it was, and is, supposed to be used by hundreds of banks. Once we understood this requirement, we knew the existing Conversational Platform was not mature enough. The first critical issue to resolve was multi-tenancy, which, in the highly regulated banking industry, involved numerous regulatory questions. We needed dedicated databases to serve more than one bank properly, and separate connections routing each client’s communication to its own tenant, reflected in our middleware so that every tenant’s data stays separated at runtime. In addition, hundreds of client banks have different style guides and distinct features tailored to their specific target groups and user needs. The existing version bundled all features into one platform, but it now needed to be split. Given these evolved requirements, it was clear we needed a completely new architecture.
2. This led to another challenge: the change in team set-up. For the first version of KuCo for VisualVest, we operated in a fully autonomous working environment. To develop FinCo for VisualVest’s numerous clients, however, we needed to integrate into the VisualVest infrastructure. FinCo also required third-party services to be integrated – some external, others internal, provided by other VisualVest development teams. Suddenly our previously independent project opened up to more teams, more stakeholders, and greater visibility – along with increased pressure.
3. Lastly, a more subtle challenge came with this change. For the initial applications (like KuCo), we worked mainly on the user interface and created new features. As soon as we started focusing on performance, scalability, and third-party integrations, however, our work shifted from visible front-end features to mainly technical and architectural tasks hidden behind the scenes. This resulted in longer wait times for our stakeholders and Product Owner, who weren’t receiving the visible user features they wanted as fast as they were used to. Luckily, we have a strong PO with the technical expertise to understand these obstacles and explain them to the stakeholders. We also have patient stakeholders who prioritize high quality over speed, even if it means taking extra time to ensure stability.
Solution.
What did you do? What technologies did you use? How long did it take?
Business solution
The application itself solves the business problem: a new digital channel was created to reach and communicate with digital-native customers, and a process was established whereby legal notices can be placed within the application and confirmed by the user.
Technical solutions
The technical solution needed to solve:
Scalability: We needed to serve hundreds of banks via a multi-tenancy concept (covering separate data processing, data storage, feature sets, and stylings), and it turned out we also needed to resolve a performance issue that arose with multiple tenants.
Scalability & multi-tenancy
We faced a complex challenge when implementing multi-tenancy in the chatbot application. To overcome it, we met as a team in the office – a special occasion for us, since we work fully remotely in Lisbon. We brainstormed for a couple of days to determine the right approach and developed a solution that handled separate connections via WebSockets, isolated database access, and defined rights and roles. We placed a strong emphasis on planning, since we wanted this to be a one-time effort that we wouldn’t have to revisit.
The implementation took us about two months, using our established tech stack: Node.js, TypeScript, NestJS, and ArangoDB.
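As a rough illustration of the tenant-isolation idea – a dedicated database per bank, with every connection resolved to exactly one tenant before any data is touched – the core mechanism could be sketched like this. All identifiers and the host-based resolution strategy are assumptions for illustration, not VisualVest’s actual code:

```typescript
// Each tenant (bank) maps to its own dedicated database, so tenant
// data stays separated at runtime.
interface TenantContext {
  tenantId: string;
  dbName: string; // dedicated database per bank
}

const tenants = new Map<string, TenantContext>([
  ["bank-a", { tenantId: "bank-a", dbName: "finco_bank_a" }],
  ["bank-b", { tenantId: "bank-b", dbName: "finco_bank_b" }],
]);

// Resolve the tenant from something the connection carries, e.g. a
// subdomain or a WebSocket handshake parameter (assumed here).
function resolveTenant(host: string): TenantContext {
  const tenantId = host.split(".")[0];
  const ctx = tenants.get(tenantId);
  if (!ctx) throw new Error(`Unknown tenant: ${tenantId}`);
  return ctx; // all subsequent queries use ctx.dbName only
}
```

Rejecting unknown tenants up front, rather than falling back to a shared default, is what keeps data from ever crossing tenant boundaries.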
Styling: Customized branding & landing pages
One aspect of the multi-tenancy challenge was that each bank or tenant requires its own dedicated branding of the chatbot and the landing page where the chatbot is embedded. To meet the branding requirements, we extended our Middleware Dashboard to allow other teams to easily customize CSS, JavaScript, images, and HTML for their tenant’s branding. Customization can now be accomplished either through our Middleware dashboard editor or an API endpoint that automatically creates a custom template folder.
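The "custom template folder" idea can be sketched in a few lines: an API call (or the dashboard editor) drops a tenant’s branding assets into a dedicated folder, from which that tenant’s landing page is later served. The folder layout and file names below are assumptions for illustration:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Create (or overwrite) a tenant's template folder with its own
// stylesheet and landing page. Real branding would include images
// and JavaScript as well; two files keep the sketch short.
function createTenantTemplate(
  root: string,
  tenantId: string,
  css: string,
  html: string
): string {
  const dir = path.join(root, tenantId);
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, "branding.css"), css);
  fs.writeFileSync(path.join(dir, "landing.html"), html);
  return dir;
}
```

Because each tenant owns its folder, other teams can update a bank’s branding without touching any shared configuration.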
It took us approximately 1.5 sprints to deliver this dynamic solution, and it solved a big problem for DevOps: they no longer need to be asked to update and maintain the landing pages, since the teams can now do this on their own.
Middleware tenant dashboard
Middleware dashboard editor:
Performance optimization
During multi-tenancy testing, we encountered another issue. Three years ago, the platform’s capacity of at most 1,000 parallel connections was sufficient. With hundreds of tenants, however, stress tests revealed significant problems: the CPU became saturated, users were disconnected, messages were lost, and we found that roughly 1,700 simultaneous client connections was the limit, regardless of the server or system we used.
Initially, we admitted users sequentially, so the first one blocked the second, and so on. Even storing per-user consent, consisting of four fields (IP, city, state, country), was too time-consuming. To address this, we decoupled the work into worker threads to handle multiple requests at the same time. In numbers: a baseline of 300–400 MB of memory was reduced to just 29 MB by moving this work to separate threads.
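Offloading a write like the consent record to a worker thread might look like the following sketch, using Node.js’s built-in worker_threads module. The worker body and message format are illustrative; the real persistence logic is not shown:

```typescript
import { Worker } from "worker_threads";

// Hand the consent payload to a worker thread so the main event loop
// stays free for other connections. The worker code here is a stub:
// a real implementation would persist workerData to the database.
function storeConsentOffThread(consent: {
  ip: string;
  city: string;
  state: string;
  country: string;
}): Promise<string> {
  const workerCode = `
    const { parentPort, workerData } = require("worker_threads");
    // ...persist workerData (ip, city, state, country) here...
    parentPort.postMessage("stored:" + workerData.country);
  `;
  return new Promise((resolve, reject) => {
    const w = new Worker(workerCode, { eval: true, workerData: consent });
    w.once("message", resolve);
    w.once("error", reject);
  });
}
```

In production one would keep a pool of long-lived workers rather than spawning one per request, since thread startup has its own cost.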
Although the worker threads helped, they weren’t enough. We needed to rethink the architecture and started a huge refactoring task that took several months. To address the performance issue, we introduced microservices and multithreading so as not to overwhelm the CPU, and we added queues to process messages sequentially. Queues allowed us to manage client disconnects, handle too many open connections, and reduce data loss on both client and server sides. For our queuing solution, we chose Bull and Valkey, an open-source fork of Redis.
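The actual Bull setup needs a running Redis-compatible server (with Bull, roughly `queue.process(handler)` on the consumer and `queue.add(payload)` on the producer). To show the property the queue buys us without that dependency, here is a minimal in-memory stand-in: messages are handled strictly one at a time, so a burst of connections can no longer saturate the CPU or drop messages:

```typescript
// In-memory sketch of sequential message processing (a stand-in for
// Bull + Valkey, which require an external server). One consumer
// drains the queue; producers only append.
class SequentialQueue<T> {
  private items: T[] = [];
  private running = false;

  constructor(private handler: (item: T) => Promise<void>) {}

  add(item: T): void {
    this.items.push(item);
    void this.drain();
  }

  private async drain(): Promise<void> {
    if (this.running) return; // only one consumer at a time
    this.running = true;
    while (this.items.length > 0) {
      await this.handler(this.items.shift() as T);
    }
    this.running = false;
  }
}
```

A persistent queue adds what this sketch cannot: messages survive a crash or a client disconnect, which is exactly what reduced the data loss described above.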
We decoupled our original middleware into four distinct microservices: Platform (administration API), Tenants, Socket, and NodeRED. This shift allowed us to scale each microservice independently, and with the use of Docker and Docker Compose, the process was streamlined, almost like adjusting a few knobs. This approach provides a reliable solution, as each microservice was stateless — if one instance failed, another could seamlessly take its place with minimal disruption to the client experience.
Scaling is now done horizontally using Docker Compose and DevOps can scale it on demand.
With this new architecture, the loading time for clients accessing the chatbot dropped significantly: from 2–4 seconds to just milliseconds. Implementing the new architecture took about four months and, because it refactored the whole system, it is still being improved. We plan to look into Kubernetes in future iterations as the next step for orchestration and scaling.
One reason this process took several months relates to the third challenge mentioned earlier: to deliver valuable, usable progress to our stakeholders, we did not focus solely on refactoring tasks, which offer limited tangible value to the system’s users. Our refactoring strategy involved continuing to develop new features alongside the migration to the new architecture: one part of the team refactored while the other worked on new features. To handle both development streams we used branches – a main branch and a separate refactoring branch – delivering the refactoring bit by bit. Splitting the system into microservices also helped: the modular design let us switch the communication with other internal VisualVest services one segment at a time during the refactoring, rather than all at once.
Results.
What was the outcome?
Benefits of refactoring tasks: Not everyone on our team worked on the project from the beginning, and even though the refactoring was very intense, it was a great opportunity to learn every part of the application, which made us more confident and faster as developers.
Accomplishing the architecture challenge: Considering a new architecture and most likely introducing new technologies is always a big challenge. We were not sure in the beginning that we would find the ideal solution, but VisualVest trusted us. As a team, we are incredibly proud and thankful for the freedom VisualVest gave us to pursue the best solution. It was also very interesting to learn how to improve performance issues with Valkey.
DevOps-friendly scalability: We successfully made the Conversational Platform multi-tenant ready, improved its performance, and created endpoints that other VisualVest teams can use to manage aspects of flows and tenants. This enables the banks to change details of their flows in the Conversational Platform directly through existing administrative forms.
What are your lessons learned?
Looking back over the past four years, we learned that our initial technology stack didn’t fully meet the performance requirements at scale. While TypeScript accelerated the development of our dynamic platform, it also introduced performance limitations. When scaling to support 5,000, 10,000, 50,000, or even 100,000 users connected via sockets and exchanging messages at rates that push network capacity into the Gbps range, JavaScript’s single-threaded event loop struggled to keep up.
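The single-threaded limitation is easy to demonstrate. In this small illustration (not production code), a burst of CPU-bound work stands in for heavy synchronous message handling, and a timer scheduled for 10 ms cannot fire until the loop is free again:

```typescript
// CPU-bound work on Node.js's single-threaded event loop delays every
// other task, including socket messages and timers.
function busyWork(ms: number): void {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin: stands in for heavy synchronous message handling
  }
}

const start = Date.now();
setTimeout(() => {
  // Scheduled for 10 ms, but it can only fire once the loop is free,
  // i.e. after roughly 100 ms of blocking work.
  console.log(`timer fired after ${Date.now() - start} ms`);
}, 10);
busyWork(100);
```

With 100,000 socket clients, every blocked millisecond on this one loop delays every connection at once, which is why the work had to move off-thread or into separate services.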
Initially, we explored using worker-threads to improve vertical scaling, and while there was some performance gain, it wasn’t proportional to the number of CPUs added. This was due to inefficiencies in Node.js when communicating with worker-threads; while everything in Node.js may seem like an object, the reality changes when data must be serialized and deserialized between the main thread and worker-threads.
At that point, we missed the simplicity and efficiency of Go’s goroutines. With Go, we could simply open a channel and launch several goroutines by adjusting a few lines of code. This approach would have been ideal for vertical scaling.
In hindsight, we might have chosen Go over Node.js for this project.
Additionally, we’ve become strong advocates of in-person planning sessions, especially when tackling complex architectural changes. Our experience has shown that on-site meetings lead to more focused discussions, and it’s worth making time for face-to-face conversations whenever possible.