TurinTech AI – https://www.turintech.ai

From a £10 Optimisation to £400K Savings: Artemis AI Boosts QuantLib Runtime by 32.72% in Five Clicks
https://www.turintech.ai/artemis-ai-boosts-quantlib-runtime-by-32-72-in-five-clicks/ – 30 May 2024

Unlock Cost Savings in Your Code

Imagine saving up to £400,000 annually in compute costs with just a £10 investment in code optimisation. With Artemis AI, our GenAI-powered code optimisation platform, entire codebases can be optimised quickly for as little as £10, avoiding both the expense of running inefficient code and the valuable developer time spent spotting and fixing code inefficiencies.

In this blog article, we will walk you through a recent project involving the QuantLib C++ library, where our engineer used Artemis AI to achieve a 32.72% faster runtime. Our pull request was successfully merged, meaning all financial firms leveraging QuantLib will benefit from this optimisation.

For example, a bank spending £100,000 monthly on cloud computing resources for QuantLib-based financial applications could save up to £32,720 per month – £392,640 annually – with a mere £10 investment in achieving a 32.72% runtime improvement.
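As a sanity check, the savings projection above is simple proportional arithmetic (assuming compute cost scales linearly with runtime; figures are the blog's own):

```python
# Illustrative arithmetic for the example above.
monthly_spend = 100_000          # £ per month on cloud compute for QuantLib apps
runtime_improvement = 0.3272     # 32.72% faster runtime

monthly_savings = monthly_spend * runtime_improvement
annual_savings = monthly_savings * 12

print(f"£{monthly_savings:,.0f} per month, £{annual_savings:,.0f} per year")
# → £32,720 per month, £392,640 per year
```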

Project Overview

QuantLib is an open-source library extensively used by financial institutions for quantitative finance tasks such as modelling, trading, and risk management. It powers a wide range of financial applications, including financial software platforms, research tools, and custom applications developed by companies.

The Challenge

Slow and inefficient code in critical libraries like QuantLib can significantly impact the speed and efficiency of financial applications, reducing profitability and competitiveness for businesses.

A single codebase like QuantLib can have hundreds of thousands of lines of code. Identifying inefficiencies in such codebases is a time-consuming task for engineers. Even experienced performance engineers can take days or weeks to write and validate improved code versions, making code optimisation a cumbersome process.

Solution

With Artemis AI, one of our engineers optimised the performance of QuantLib in just five clicks and three easy steps:

  1. Code Analysis: Artemis AI’s automated code analysis feature utilised large language models (LLMs), static analysis, and profilers such as Intel® VTune™ to identify multiple performance bottlenecks in the codebase. Our engineer completed this analysis in just 2 minutes.
  2. LLM-based Code Recommendations: Our engineer selected several LLMs (e.g. ArtemisLLM, GPT-4 Turbo, and Claude Opus) on the Artemis AI platform to generate over a hundred code recommendations that could potentially boost the performance of QuantLib. Artemis AI automatically scored and validated each recommendation, helping our engineer decide on the most effective and secure code changes.
  3. Code Optimisation: Artemis AI identified the optimal combination of code changes from 700 candidates. The platform also provided performance metrics (e.g., runtime, CPU usage, memory usage) for informed decision-making in implementing the code changes.
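The combination step above can be pictured as a search over subsets of validated patches. A toy sketch, with invented patch names, timings, and a conflict rule (Artemis AI's actual search operates at a far larger scale):

```python
# Choose the subset of candidate patches with the largest combined runtime
# reduction, subject to a compatibility constraint. All values are invented.
from itertools import combinations

# Estimated runtime reduction (seconds) per validated patch, measured in isolation.
patches = {"A": 1.2, "B": 0.8, "C": 0.5, "D": 0.3}
conflicts = {frozenset({"A", "C"})}   # some patches cannot be applied together

def valid(combo):
    """A combination is valid if it contains no conflicting pair."""
    return not any(c <= set(combo) for c in conflicts)

best = max(
    (combo for r in range(1, len(patches) + 1)
           for combo in combinations(patches, r) if valid(combo)),
    key=lambda combo: sum(patches[p] for p in combo),
)
print(best)  # → ('A', 'B', 'D')
```

In practice the candidate set is far too large for exhaustive search, which is where Artemis AI's evolutionary optimisation comes in.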

The figures below compare the runtime of the original code version with the optimised version by Artemis AI.

*The 32.72% runtime improvement was calculated by averaging the results of 20 unit test runs before and after optimisation by Artemis AI.
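The measurement method above is straightforward to reproduce: average the paired timings and take the relative difference. A sketch with made-up runtimes:

```python
# Relative runtime improvement from N timed runs before and after optimisation.
# The timings below are invented for illustration.
from statistics import mean

before = [10.4, 10.1, 10.3, 10.2] * 5   # 20 runs of the original code (seconds)
after  = [6.9, 7.0, 6.8, 7.1] * 5       # 20 runs of the optimised code (seconds)

improvement = (mean(before) - mean(after)) / mean(before) * 100
print(f"runtime improvement: {improvement:.2f}%")
```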

Benefits

  • Performance Improvements: Achieved a 32.72% runtime acceleration with one pull request.
  • Developer Productivity: Developers are freed from weeks of manual optimisation, allowing them to focus on more strategic tasks.
  • Business Impact: Faster analysis and responsiveness to financial market changes, substantial cloud cost savings, and reduced carbon emissions.

By leveraging Artemis AI, businesses can operate faster, greener, and more efficiently. Our platform’s ability to quickly optimise codebases allows your developers to focus on innovation, boosting productivity and enhancing your competitive edge.

For more insights, check out our previous blog on how financial services can leverage Artemis AI for code upgrades and refactoring to achieve significant performance improvements and cost savings.

TurinTech AI + Intel® AI Tools Frameworks: Speed up your ML development and inferencing on CPUs and GPUs
https://www.turintech.ai/turintech-aiintel-ai-tools-frameworks-speed-up-your-ml-development/ – 10 May 2024

Introduction

The pace of innovation in artificial intelligence is astounding. From financial forecasting to customer service chatbots, AI now plays a pivotal role across industries. However, developing high-performance AI applications remains difficult and time-consuming. Data scientists and AI developers face immense challenges in understanding advanced software and hardware architectures to maximise AI performance. The data science lifecycle itself tends to be fragmented and slow, taking months to go from an experimental idea to a production-ready AI application.

To overcome these challenges, TurinTech AI has partnered with Intel to make developing production-ready, high-speed ML models easier than ever. In this article, we will examine the intricacies of building performant AI applications and how TurinTech AI and Intel jointly solve the key pain points. 

Boosting ML Performance is Critical but Challenging 

Slow ML models can lead to financial loss, customer churn, and technical performance problems. In our previous article, we explained why fast ML predictions are crucial in business areas such as high-frequency trading, customer service, and edge devices.

The speed and efficiency of AI largely depend on four main areas: data, infrastructure, code, and talent. However, the swift advancement in AI software and hardware poses steep learning curves for data scientists and AI developers. Mastering new optimised libraries, frameworks and hardware architectures demands substantial time and effort. Many data scientists find their skills quickly outdated as companies rapidly adopt the latest innovations. Continuous learning is vital yet difficult alongside full-time roles. The need to specialise for diverse hardware like CPUs, GPUs and TPUs also hinders versatility.

Furthermore, the traditional data science lifecycle, marked by lengthy periods of data preparation, model development, and testing, often extending over weeks or even months, significantly slows down the delivery of AI solutions. This not only hampers the speed of innovation but can also lead to missed opportunities as models risk becoming outdated before deployment. 

All these complexities together make the task of improving ML performance a big challenge. This highlights the need for a streamlined way of developing high-performance machine learning for target hardware. 

TurinTech AI and Intel Join Forces to Speed Up your ML Development and Inferencing 

Intel® AI Tools 

As AI models evolve more rapidly than hardware advancements, significant performance boosts from software AI accelerators become crucial. If hardware acceleration is like upgrading to a faster bike, software acceleration is like upgrading to a supersonic jet.

Intel’s oneAPI AI Tools give data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architecture. These Intel-optimised ML frameworks and libraries, for example the Intel® Distribution for Python, the Intel® Extension for Scikit-learn, and the Intel® oneAPI Data Analytics Library, can deliver drop-in software AI acceleration of 10-100X on Intel platforms.

TurinTech’s evoML platform  

TurinTech’s evoML platform uses GenAI to speed up the process of developing and deploying production-quality ML model code. It automates the end-to-end data science lifecycle, including automatic data pre-processing, hyperparameter tuning, parallel model training, multi-objective optimisation for target hardware, and automatic ML code review and deployment.  

Harnessing GenAI, evoML supercharges data science workflows with advanced capabilities. For example, Synthetic Data Generation augments training data using realistic datasets, ensuring robust model performance, while Intelligent Model Selection suggests optimal machine learning models based on data characteristics, streamlining the ML process for the best results. 

TurinTech AI + Intel

TurinTech AI is a proud member of Intel® Partner Alliance. Together, TurinTech’s evoML platform and Intel® AI Tools offer a range of significant benefits for data scientists and AI developers.

  • Accelerated ML Lifecycle: Streamline the end-to-end machine learning process, reducing the time spent on manual data preparation and model training from weeks to days, allowing more time for ideation and experimentation. 
  • GenAI-empowered Data Science: Effortlessly tackle complex data science tasks with GenAI, from synthetic data generation and feature engineering to model selection and explanation, delivering high-performing models in less time. 
  • Seamless Integration: Access Intel’s optimised ML and LLM libraries seamlessly within the evoML platform, eliminating the need to switch between different tools and environments. 
  • Flexibility and Full Control: Tailor the ML workflow to your specific needs with flexible automation options, and gain full control by downloading the source code for further customisation. 
  • Enhanced Performance: Harness the power of Intel hardware (e.g., Intel® Gaudi 3 AI Accelerator) and software optimisations to achieve optimal AI performance, without the need to learn new tools or make significant code changes. 

Now, let’s explore two case studies that demonstrate the performance improvements achieved by leveraging evoML and Intel in different domains. 

Case Study 1: Accelerating Stock Prediction: evoML and the Intel® Distribution for Python Achieve Faster Inference and Faster Model Training

Challenge 

Achieving optimal performance for an AI-based stock prediction model is crucial for timely and accurate forecasting. However, the computational demands of such models can be intensive, leading to latency and throughput bottlenecks. 

Solution 

AI and ML developers in financial services can leverage TurinTech AI’s evoML platform to quickly build stock prediction models from their raw data. evoML seamlessly integrates the Intel® Distribution for Python* and the oneAPI AI Tools, allowing developers to harness Intel’s powerful hardware capabilities without modifying their code.

The XGBoost Classifier, a critical component of the time series classification model, was optimised using the Intel® Python* API (daal4py) for the Intel® oneAPI Data Analytics Library (oneDAL), to leverage the capabilities of a 3rd Generation Intel® Xeon® Platinum 8380 CPU. This optimisation ensures that the stock prediction model can efficiently utilise the underlying hardware resources.
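The daal4py conversion described above follows a standard pattern: train with XGBoost, then convert the booster once for oneDAL-accelerated inference. A hedged sketch (the data, model parameters, and fallback handling are invented for illustration, and the snippet degrades gracefully when `xgboost` or `daal4py` is unavailable):

```python
# Hedged sketch, not evoML's actual code: convert a trained XGBoost model to
# daal4py (oneDAL) for accelerated inference on Intel CPUs.
try:
    import numpy as np
    import xgboost as xgb
    import daal4py as d4p

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = (X[:, 0] > 0.5).astype(int)
    booster = xgb.train({"objective": "binary:logistic"},
                        xgb.DMatrix(X, label=y), num_boost_round=10)

    daal_model = d4p.get_gbt_model_from_xgboost(booster)   # one-off conversion
    result = d4p.gbt_classification_prediction(nClasses=2).compute(X, daal_model)
    accelerated = True    # oneDAL inference path used
except Exception:
    accelerated = False   # libraries unavailable; stock XGBoost inference applies
```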

 Benefits

  • Hardware acceleration with Intel’s CPU architecture for maximum performance. 
  • Rapid deployment of high-performance AI trading solutions for a competitive edge.
     

Case Study 2: Supercharging Machine Learning with evoML and Intel® Extension for Scikit-learn: Faster Predictions, Higher Accuracy, Better Precision 

Challenge 

While Scikit-learn offers a wide range of machine learning algorithms and a user-friendly interface, its computational demands can lead to performance bottlenecks, hindering the timeliness of insights and compromising model accuracy. 

Solution 

Data scientists can combine TurinTech AI’s evoML platform with the Intel® Extension for Scikit-learn to accelerate their Scikit-learn workflows. This solution dynamically patches Scikit-learn estimators with mathematically equivalent but accelerated versions, powered by Intel’s AI Tools. By doing so, they benefit from hardware acceleration on Intel CPUs and GPUs across single-node and multi-node configurations, without modifying their existing code.
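In the extension itself, the dynamic patching described above is a two-line change. A minimal sketch assuming the `scikit-learn-intelex` package (the fallback handling is ours):

```python
# Hedged sketch: patch_sklearn() swaps supported scikit-learn estimators for
# mathematically equivalent, Intel-accelerated implementations.
try:
    from sklearnex import patch_sklearn, unpatch_sklearn

    patch_sklearn()      # from here on, `from sklearn... import ...` picks up
                         # accelerated KMeans, SVC, RandomForest, etc.
    # ... train and predict exactly as before: no other code changes ...
    unpatch_sklearn()    # restore the stock scikit-learn implementations
    patched = True
except Exception:
    patched = False      # extension not installed; stock scikit-learn is used
```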

About Intel® AI Tools 

Intel® AI Tools is an open, standards-based programming system designed to simplify development for data-centric workloads across various architectures like CPUs, GPUs, FPGAs, and more. It offers a unified approach, eliminating the need for separate codebases and tools for each architecture. 

About Intel® Partner Alliance 

Intel® Partner Alliance membership gives you unique business-building opportunities, like entry to the Intel Partner Showcase, advanced training, and promotional support—all tailored to your needs. 

About TurinTech AI 

TurinTech AI is the leader in code optimisation for machine learning and data-intensive applications. Its flagship products, evoML and Artemis AI, help companies easily unlock the full potential of their code and data through Generative AI. Clients benefit from operating faster while also being more sustainable and more efficient.
 

 

Three Ways Financial Services can Leverage Artemis AI
https://www.turintech.ai/three-ways-financial-services-can-leverage-artemis-ai/ – 2 May 2024

The financial industry faces unique challenges in software management, including dealing with legacy systems, maintaining code security, and ensuring optimal performance. Artemis AI addresses these challenges head-on, offering a suite of tools designed to transform how financial institutions develop and maintain their codebases.

The Challenges of Managing Financial Software

  • Overcoming legacy system limitations
  • Ensuring code security and compliance
  • Optimising performance
  • Simplifying code management and debugging processes

Artemis AI

Artemis AI harnesses the power of GenAI combined with proprietary genetic optimisation techniques to offer a suite of capabilities to refactor, upgrade, and optimise code underlying financial applications.

1. Enhancing Financial Software Efficiency and Security

Problem:
Financial institutions often struggle with legacy software systems written in outdated programming languages. These systems are hard to maintain, prone to security vulnerabilities, and not optimised for current technological standards. Maintaining and upgrading these systems is both time-consuming and costly.

Solution:
Artemis AI’s Code Upgrade functionality allows financial institutions to easily update their code to the latest language version. This feature enables a seamless transition to the latest features of a coding language, enhancing maintainability and compatibility with current technologies. Code Upgrade also checks that no new inefficiencies are introduced into upgraded codebases, so they keep running at optimal performance.

Benefits:

  • Code resilience: With Artemis AI and its code upgrade feature, users can easily update their codebases to the latest releases of languages and libraries. This improves code performance while addressing any vulnerabilities, ultimately increasing the resilience of code bases.
  • Code Debugging: Post-upgrade, Artemis AI can analyse error logs and fix bugs that emerge from the upgrade process, ensuring a smooth transition.
  • Cost and Time Efficiency: By automating the upgrade process, financial institutions save valuable time and resources, allowing them to focus on core business activities.

2. Streamlined Code Management

Problem:
Financial analytics tools require constant updates and improvements to handle large volumes of transactional data efficiently. However, finding and rectifying inefficiencies or bugs in a vast code repository is a daunting task for developers, often leading to delayed updates and decreased performance.

Solution:
Artemis AI’s code search and chat features enable developers to quickly find specific segments within a vast code repository and interact with the codebase efficiently through a chat interface. Furthermore, it also allows developers to ask and get answers to general questions about coding from the web without leaving Artemis AI. This capability is complemented by Artemis AI’s code optimisation, which identifies and rectifies inefficiencies at scale.

Benefits:

  • Enhanced Productivity: Developers can swiftly locate and interact with the necessary code segments, significantly reducing the time spent searching for inefficiencies and refactoring code.
  • Optimised Performance: Artemis AI optimises code performance to ensure that the financial analytics platforms run efficiently, handling large data sets without performance lags.
  • Improved Code Quality: Regular debugging and security checks maintain high standards of code quality, crucial for sensitive financial data processing.

3. Robust Security for Financial Transaction Systems

Problem:
Financial transaction systems are prime targets for cyber-attacks. Ensuring the security of these systems is paramount, but identifying and fixing security loopholes in a large and complex codebase is challenging.

Solution:
Artemis AI’s Code Refactoring and Code Upgrade functionalities offer a proactive approach by continuously scanning the codebase for potential security vulnerabilities and suggesting necessary code changes to fortify these systems. Artemis AI also has built-in testing options such as unit tests and compilation tests to strengthen the reliability and validity of the code changes and the codebase.

Benefits:

  • Enhanced Security: Continuous monitoring and updating of the codebase significantly reduces the risk of cyber-attacks.
  • Operational Reliability: Automated refactoring ensures that the financial transaction systems are always robust, maintaining trust and reliability for users.
  • Cost-Effective Security Maintenance: Automating the security checks and debugging processes reduces the need for extensive manual oversight, leading to cost savings.

As financial software challenges evolve, solutions like Artemis AI empower institutions to streamline code management, improve security, and optimise performance. By leveraging GenAI and genetic optimisation, Artemis AI offers a comprehensive suite to tackle financial software complexities, enabling institutions to maintain a competitive edge.

TurinTech AI to Showcase Code Optimisation Solutions at Retail Technology Show 2024
https://www.turintech.ai/turintech-ai-to-showcase-code-optimisation-solutions-at-retail-technology-show-2024/ – 11 April 2024

11 April, 2024 – LONDON – TurinTech AI, a leader in AI-powered code optimisation, has announced it is exhibiting at Retail Technology Show 2024, the flagship event for retail.

Retail Technology Show is the event that brings together Europe’s most forward-thinking retailers and leading tech innovators, where retailers can fast-forward their digital transformation strategies and empower their businesses to thrive, survive and disrupt, powered by technological advancements and innovation. Taking place at London’s Olympia on 24-25 April 2024, the Retail Technology Show’s mission is to drive the industry forward through innovation, by bringing together the brightest minds in retail and future-forward technology providers.

TurinTech AI will be showcasing its two optimisation solutions, evoML and Artemis AI, at stand 4B22. TurinTech AI combines the power of Artemis AI and evoML to enhance businesses’ software and machine learning projects. Artemis AI uses Large Language Models (LLMs) to improve code quality and performance by identifying inefficiencies and benchmarking optimal changes, making code run faster, greener and more efficiently.

evoML, on the other hand, helps data scientists and business users accelerate the delivery of high-performing ML models from months to weeks and improve ML code efficiency for faster running speed and higher profitability. Together, they offer a comprehensive solution for automating and optimising both AI and code, allowing businesses to focus on innovation while saving time and resources.

Dr Leslie Kanthan, CEO and Co-founder at TurinTech AI commented: “We’re joining the Retail Technology Show to showcase how our AI-driven optimisation solutions can revolutionise retail. Sustainability remains a strategic imperative for the retail sector in an era where shopping behaviours and consumer journeys are rapidly evolving, and our products offer retailers the agility needed to adapt quickly and sustainably. Artemis AI can help optimise retail operation software, making it more efficient, while evoML ensures that machine learning models are perfectly aligned with customer needs and market demands. Together, they empower retailers to deliver exceptional, personalised shopping experiences that meet consumers’ expectations for speed, personalisation, and sustainability.”

 

To register to attend the Retail Technology Show, visit:

For further press information about TurinTech AI, please contact Roxana Dragomir, Marketing at TurinTech AI: roxana@turintech.ai

For further information about the Retail Technology Show, the event that is transforming retail today, please contact Sarah Cole: sarah.cole@fieldworksmarketing.co.uk

 

About TurinTech AI:

TurinTech AI is the leader in code optimisation for machine learning and data-intensive applications.

Founded in London in 2018 by PhDs from UCL, the company lists industry giants like Intel and AWS among its clients and partners. Its flagship products, evoML and Artemis AI, help companies easily unlock the full potential of their code and data through Generative AI. Clients benefit from operating faster while also being more sustainable and more efficient.

For more information visit TurinTech AI

Follow TurinTech on social media: LinkedIn and Twitter

 

About Retail Technology Show

Launched in April 2021, the Retail Technology Show is brought to you by the experienced team who previously organised the UK’s largest retail exhibition: RetailEXPO (formerly RBTE).

For ten years we’ve been showing how to evolve ahead of the market, building a community of retailers, brands and hospitality providers with the courage to seize the opportunities ahead.

This event is unlike other expos. Here you can see, feel, hear and touch the future of retail. Be the first to see the ideas as they land. You can try out the tech and meet the people who make it happen. Not on a computer screen but face-to-face. The business interaction we have all missed. 

Our conference programme has always been known for bringing together the industry’s leaders and most influential voices. This year will be no exception. This will be the place to gain first-hand insight to shape your growth plans ahead.

Our Story – Retail Technology Show – Transforming Retail Today




Four Ways Developers Can Harness Artemis AI for Quality Code and Performance
https://www.turintech.ai/four-ways-developers-can-harness-artemis-ai-for-quality-code/ – 22 March 2024

Businesses face unique challenges in software management, including dealing with legacy systems, maintaining code security, and ensuring optimal performance. Our AI developer platform for quality code, Artemis AI, addresses these challenges head-on, offering a range of capabilities designed to transform how businesses manage and optimise their code bases at scale.

The Challenges of Maintaining Software Quality

  • Overcoming legacy system limitations
  • Ensuring code security and compliance
  • Optimising performance
  • Simplifying code management and debugging processes

1. Code Upgrade

Problem

Companies often struggle with keeping their software applications up to date with the latest versions of the programming languages they are written in. This can lead to security vulnerabilities, inefficiencies, higher maintenance costs, and compatibility issues with modern systems.

Solution

Artemis AI’s code upgrade capability can fast track the process of modernising code from an older version to the latest, such as upgrading from C++17 to C++20, ensuring the software stays up-to-date with current standards.

Benefits

By modernising codebases, organisations can extend the life of their legacy systems, improve interoperability with new technologies, and enhance overall system security and maintainability. With Artemis AI, engineers can enhance their productivity by reducing the time spent on manually upgrading each application while keeping costs low.

2. Code Search 

Problem

As codebases grow, finding specific functionalities or understanding parts of the code can become increasingly difficult, especially for new team members or when dealing with poorly documented code.

Solution

Artemis AI’s chat interface can understand natural language queries, allowing developers to ask complex questions about the codebase, such as how different code elements work with each other. It also enables web searches (e.g. Google) for the latest open-source libraries and information.

Benefits

This advanced search capability fosters better team collaboration and knowledge transfer, making onboarding new developers faster and more efficient. It acts as a live documentation tool that can significantly reduce the learning curve for complex codebases.

3. Code Optimisation

Problem

Beyond general inefficiencies, specific code sections may be overusing resources due to suboptimal algorithms or unoptimised data structures, leading to enormous compute costs and scalability issues as the application grows.

Solution

Artemis AI can perform deep automatic code analysis to identify inefficiencies that slow down your codebase. It then suggests algorithmic improvements and data structure optimisations, providing alternative code snippets that perform the same tasks more efficiently. This capability is powered by our in-house, fine-tuned ArtemisLLM designed to enhance code performance.
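As an illustration of the kind of data-structure optimisation described above (invented for this article, not actual Artemis AI output), consider replacing repeated list membership tests with a set:

```python
# Same behaviour, different data structure: list membership is O(n) per lookup,
# set membership is O(1) on average.
import timeit

items = list(range(10_000))
lookups = list(range(0, 10_000, 7))

def slow():          # original: linear scan of the list for every lookup
    return sum(1 for x in lookups if x in items)

def fast():          # optimised: build a set once, then hash lookups
    item_set = set(items)
    return sum(1 for x in lookups if x in item_set)

assert slow() == fast()                  # identical results
t_slow = timeit.timeit(slow, number=20)
t_fast = timeit.timeit(fast, number=20)
print(f"speedup: {t_slow / t_fast:.1f}x")
```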

Benefits

This targeted optimisation helps scale applications effectively, better manage resource utilisation, and prepare systems for future growth. It also aids in reducing environmental impact by making applications more energy-efficient.

4. Code Security

Problem

The ever-evolving landscape of cyber threats means that code must be continually assessed for vulnerabilities, including those that may not have been recognised at the time of development.

Solution

Artemis AI can spot security deficiencies across hundreds of repositories and suggest code changes to secure your code bases, ensuring that applications are not only efficient but also secure against potential threats.

Benefits

This proactive approach to security ensures that applications remain resilient against emerging threats, reduces the risk of costly security incidents, and maintains customer trust by safeguarding sensitive information.

Backed by a decade of research in evolutionary optimisation, Artemis AI enables developers to flexibly utilise LLMs (including our proprietary code performance LLM), coupled with evolutionary optimisation and robust testing, to generate high-quality, high-performance software code. By adopting Artemis AI, businesses can significantly advance software efficiency, security, and regulatory compliance, positioning themselves for success in a rapidly changing technological landscape.

Artemis AI: Combining the Power of LLMs and Evolutionary Optimisation for High-quality and High-performance Software Code
https://www.turintech.ai/artemis-ai-combining-the-power-of-llms/ – 7 March 2024

Programming: More Than Just Language

Since 2023, there’s been a real buzz around GenAI coding tools like GitHub Copilot, which quickly attracted a large user base of over 400,000 people within just a month! This trend marks a significant change in how we think about writing computer programs, as discussed in our previous blog about GenAI’s role in software development.

For those worried that AI might take over programming jobs, there’s good news. The toughest part of software creation isn’t the coding, but the design of software architecture, which is still determined by humans. Thus, coding isn’t just about knowing a language. It’s much more than that – it’s a mix of language, logical thinking, and creativity for problem-solving.

Most GenAI code generation tools utilise LLMs based on the transformer architecture. This architecture is excellent for recognising language patterns but it has notable limitations, such as a limited understanding of your business context and enterprise system, restricted capabilities in performance and security, and reliance on the quality of its training data, which is often sourced from public code repositories like GitHub and StackOverflow.

Consider, for example, a common software application like a photo editing app. AI tools might manage to write code for basic functions such as cropping, but they struggle with more complex tasks. These include designing an app that’s efficient across various devices or introducing novel features, tasks that require deep logical thinking and innovative problem-solving. This is why LLMs alone cannot solve the multifaceted and nuanced task of programming.

Coding, Fast and Slow

In his groundbreaking book “Thinking, Fast and Slow,” Nobel-prize-winning psychologist Daniel Kahneman introduces two distinct yet interconnected modes of thought: “System 1” is fast, instinctive, and emotional; “System 2” is slower, more deliberative, and logical. This dichotomy provides an insightful framework for understanding different approaches in AI-powered coding.

Coding Fast: The Pitfalls of LLM-based Code Generation

I’m not sure if this is your experience as well, but the recommendations I get from Copilot and scripts from GPT are often wrong. Sometimes I see inefficient solutions, code that has bizarre logic errors, or code that seems right but just doesn’t work.
-Vickie Li, Engineer at Instacart

The “Coding Fast” approach mirrors “System 1.” Here, tools like GitHub Copilot exemplify rapid, intuitive code generation. They quickly churn out code based on learned patterns, embodying the fast, instinctual responses of LLMs. However, this rapidity can sometimes lead to bad code and gaps in understanding complex, unique project needs, much like the hasty decisions in human cognition.


Figure 1: Challenges of Using LLMs for Coding

Coding Slow: The Necessity for Deliberate Logic and Quality Assurance

In contrast, “Coding Slow” mirrors “System 2” thinking. It’s a methodical, thoughtful approach to programming. This process is about taking the time to logically structure code, ensuring quality, reliability, performance and scalability. It involves foreseeing how different parts of the code will interact, predicting potential issues, and creatively troubleshooting them. This deliberate approach allows developers to weave in logic and creativity, leading to high-performance code that LLMs cannot yet generate.

In essence, “Coding Slow” is about understanding the project’s broader context and long-term implications, ensuring every line of code serves a purpose and contributes to the overall performance of the application.


Figure 2: Fast Coding vs Slow Coding

An Evolutionary System for Optimal Code

Is it possible to integrate the speed of ‘fast coding’ with the meticulousness of ‘slow coding’ to create high-quality, high-performance code?

We think, yes. This synergy is best understood when we consider optimising code as an evolutionary process, similar to natural selection in biology. This is exemplified in TurinTech’s “Darwinian Data Structure Selection” paper, which discusses using evolutionary algorithms to boost application performance and minimise runtime, memory and CPU usage.

In this context, fast-generated code is like the initial gene pool, providing the foundational population that undergoes evolution. Over time, this code evolves, adapting to become better quality and tailored to its intended production environment and business goals. It’s a continuous cycle of adaptation and refinement, where each iteration brings us closer to peak performance.
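The cycle described above can be sketched in a few lines of Python. This is purely an illustrative toy, not Artemis AI’s actual implementation: in practice `fitness` would be a measured metric such as runtime and `mutate` an LLM-generated code rewrite, whereas here both are simple stand-in functions.

```python
import random

def evolve(initial_variants, fitness, mutate, generations=10, pool_size=8):
    """Survival-of-the-fittest loop over candidate code variants.

    fitness: lower is better (e.g. measured runtime in seconds).
    mutate:  produces a new candidate from an existing one
             (stands in for an LLM rewrite here).
    """
    pool = list(initial_variants)
    for _ in range(generations):
        # Generate a new batch of candidates from the current pool.
        children = [mutate(random.choice(pool)) for _ in range(pool_size)]
        pool.extend(children)
        # Keep only the fittest variants for the next generation.
        pool = sorted(pool, key=fitness)[:pool_size]
    return min(pool, key=fitness)

# Toy stand-in: "variants" are numbers, fitness is distance from zero,
# and mutation nudges a variant slightly.
best = evolve(
    initial_variants=[100.0, 50.0, 75.0],
    fitness=abs,
    mutate=lambda v: v + random.uniform(-10, 10),
)
```

Because the best variant found so far always survives the selection step, each generation is at least as good as the last — the elitism that makes the loop self-enhancing.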


Figure 3: Evolutionary Optimisation-Survival of the Fittest

LLMs Advance Rapidly, Is an Evolutionary System Still Necessary?

Leading AI companies like OpenAI and Google are unveiling new AI models and product features at an unprecedented pace. This raises the question: Is it still necessary to combine LLMs with an evolutionary system?

In a recent article, “The Shift from Models to Compound AI Systems,” Matei Zaharia, CTO and co-founder of Databricks, outlined four key benefits of using compound AI systems over relying solely on LLMs.

Let’s consider this within the context of software engineering:

  • Business Objectives Differ
    Each LLM has a set quality level and cost. However, to meet the wide range of application demands, businesses require the right mix of models to achieve an optimal balance of quality, cost, performance, and other metrics tailored to their specific needs.
  • Dynamic Systems
    Enterprise systems are inherently dynamic. LLMs, with their fixed knowledge derived from training data, lack the latest insights on in-house source code, proprietary libraries, tools, hardware and more.
  • Cost Efficiency
    Some programming tasks are more economically improved through iterative evolution than by scaling LLMs.
  • Enhanced Control, Safety, and Trust
    For instance, integrating LLMs with retrieval mechanisms can ensure code outputs are supported by credible references or verified data, enhancing trustworthiness.


Figure 4: Each LLM Has a Set Quality Level and Cost

Artemis AI: Combining the Power of LLMs and Evolutionary Optimisation for High-quality and High-performance Software Code

We’ve explored the idea of evolutionary coding for peak performance. But how does this translate into real-world applications? At TurinTech AI, backed by a decade of research in evolutionary optimisation, we’ve created Artemis AI, our GenAI developer platform for peak performance. This platform enables developers to flexibly utilise LLMs (including our proprietary code performance LLMs), coupled with evolutionary optimisation and robust testing, to generate high-quality, high-performance software code.


Figure 5: Artemis AI Combines LLMs with Evolutionary Optimisation

1. Import Code Projects
First, developers have the flexibility to either import their code project directly from Git repositories (e.g., GitHub) or upload their project files to Artemis AI. It supports all major programming languages, including C++, Java, and Python.

2. Code Analysis
Developers can use Artemis Assistant to easily chat and interact with their code and gain a better understanding of the pieces of code that can be changed. For example, they can ask Artemis to pinpoint code segments for refactoring or discover opportunities to increase speed and security.

3. LLM Recommendations
Next, Artemis AI utilises our proprietary LLMs, specially fine-tuned for performance optimisation, or other LLMs of your choice (such as GPT-4, Gemini, Claude) to rapidly generate new code versions. Users can choose multiple LLMs at the same time.

To make it easier to choose the right LLMs for your programming tasks, Artemis AI will show you the detailed metrics of each LLM, such as costs and inference time, and recommend the optimal combination of LLMs leveraging the Artemis AI Adaptive Learning Engine.

4. Validation Checks
Artemis AI conducts rigorous validation checks on code versions generated by LLMs in step 3. This includes compilation tests to ensure functionality, unit tests for detailed accuracy, security assessments for safeguarding, and IP reviews to avoid copyright issues.

5. Optimisation: Evolving LLMs’ Recommendations
Developers can specify their performance criteria (e.g., runtime) and provide additional context by chatting with the Artemis Assistant. Artemis AI then evolves validated code versions based on these custom criteria. The best versions are re-added to the existing pool (survival of the fittest) to generate new generations, creating a self-enhancing loop. Through continuous cycles of generation and evaluation, successive code versions converge towards the optimal version.

6. Export Final Code
Artemis AI not only provides the final optimised code but also delivers it with a comprehensive report on quality, security and performance, along with explanations of the changes made by LLMs. Developers receive ready-to-deploy, high-quality code, streamlining their development process with confidence. Additionally, developers have the option to directly download their code or to automatically generate a pull request in their Git repository, facilitating seamless integration into their existing projects.

Last but not least, Artemis AI can securely connect to your codebases and enterprise systems through on-prem deployment. Every time you run a project with Artemis AI, it gathers insights—your preferences, how different LLMs perform on your unique tasks, what code versions were accepted—and feeds this into its Adaptive Learning Engine. This process allows Artemis AI to continuously enhance its ability to deliver higher-quality code more swiftly for your future projects.

Navigating the rapidly evolving AI landscape to identify optimal practices for developing an evolutionary system for software engineering is challenging and resource-intensive. That’s why we’ve built Artemis AI: to enable you and your team to focus on creating value and stay ahead in the AI game, instead of building and maintaining tools, and to build a more productive and happier development team.

Unlock the Full Potential of your Code with Artemis AI

As discussed earlier, developing high-quality, high-performance software for intricate business scenarios is a massive challenge—one that often extends beyond the capabilities of LLMs alone. Artemis AI steps in, offering a powerful platform designed for enterprise-level quality, security, and performance. It empowers developers with a flexible framework that not only leverages LLMs and evolutionary optimisation but also integrates the critical thinking of human developers. This ensures a holistic approach to software development, enabling organisations to maximise AI impact with improved control and trust.

About the Author

Wanying Fang ​| TurinTech AI Marketing

]]> Code optimisation for better AI: TurinTech joins the IoT Insider podcast https://www.turintech.ai/turintech-joins-the-iot-insider-podcast/ Wed, 17 Jan 2024 11:29:41 +0000 https://www.turintech.ai/?p=227371 After rounding off a year where AI made waves through business and society, our CEO Dr Leslie Kanthan spoke to IoT Insider’s Editor Kristian McCann on the IoT Unplugged podcast for the first episode of the new year. Leslie covers a whole range of topics on AI and code optimisation, from AI’s recent surge and the drawbacks of current models to training software developers and how code optimisation can shape the future of the industry.

The unseen costs of AI 

While AI products have been very visible, the effects of their growth are less noticed. “You’re seeing the advent of automation and AI in the general market right now, with products like ChatGPT […]”, explains Leslie. “What you’re not seeing is the huge amount of resources that’s consuming, the models, how large they are, how much the compute costs, the cloud costs, and so forth.”

These drawbacks are creating significant inefficiencies for businesses using AI, as well as having a sizeable environmental impact – the issues that code optimisation sets out to tackle. But how does it work? 

Leslie outlines the process of AI optimisation, which involves identifying “inefficient elements and components” in the source code, generating new code using AI, and comparing the improvements made to the original code. For businesses, this is particularly effective given that “previously, optimisation techniques were a manual process […] it could take years for several developers”. 

Overcoming AI pain points

Companies integrating or using AI can suffer from common pain points. As Leslie states, “everything is now about compute power and, consequently, compute cost. […] The amount of energy consumption in this process is significant.” So, by implementing code optimisation, “you have a business impact straight away”. 

And the benefits of code optimisation extend beyond its direct impact on AI models. Production teams can get bottlenecked and stuck in their process. So, as Leslie touches on, “if you can optimise your code so it’s faster, you’re getting it into production much quicker.” Ultimately, this also means more profitability.

Those who stand to benefit

What industries does Leslie see benefiting from code optimisation the most? He picks out the financial and technology sectors – and, more broadly, anywhere that continuous integration is practised.

While it can deliver great business outcomes, Leslie is keen to flag how the process can also bring great benefits to software teams. He discusses feedback from a client about how developers are using their tool to “learn and improve themselves” by showing where the code can be better optimised. Rather than feeling AI is replacing them, it cultivates more acceptance, showing it can augment their capabilities. 

Machine learning in IoT and future trends

Looking ahead, Leslie outlines how being able to optimise operating code “can reduce the energy consumption of IoT devices” as well as helping to identify sensor data and monitor temperatures. The huge datasets in the IoT world make it perfect for AI optimisation. 

This optimisation can be used for hardware, models and potentially even devices. Referencing his own phone, Leslie notes how leaving apps on idle, for instance, consumes large chunks of battery power, and so there is great potential for optimising battery tech. 

There’s a shared buzz about AI. Everyone is looking at it, but there are huge costs attached to it, with millions spent on building and refining the LLMs out there. AI optimisation will help to maximise the benefits of these models for businesses while simultaneously reducing costs.

To find out more, listen to the full podcast episode here. To stay up to date and follow our progress, check out our LinkedIn and Twitter.

]]>
Making data centers sustainable with Artemis AI https://www.turintech.ai/data-centers-sustainable-with-artemis-ai/ Mon, 11 Dec 2023 07:59:19 +0000 https://www.turintech.ai/?p=227174 Background

Data centres are critical infrastructure in the current tech space. Rather than investing in in-house data and software hosting facilities, companies purchase server space from third-party vendors. This is significantly easier and more cost-efficient to implement. On the downside, maintaining data centres can have considerable negative effects on the environment.

The Environmental Challenge of Data Centres

The most significant challenge of maintaining data centres is the vast amount of electricity they consume. This demand is compounded by activities such as cooling: in a typical data centre, cooling systems account for about 30–50% of total electricity usage.

With the increasing demand for digital services, the energy requirements of data centres are surging, raising concerns about their environmental sustainability. Solutions are needed that make data centres environmentally sustainable without compromising on performance or quality of output.

Code optimisation as a solution

Poor quality code puts significant strain on the human and computational resources in software development. For instance, in 2020 alone, the estimated Cost of Poor Software Quality (CPSQ) in the United States was a whopping $2.08 trillion. This staggering figure includes expenditures on rework, lost productivity, and customer dissatisfaction resulting from subpar code.

Code optimisation is an often overlooked solution for reducing data centre energy consumption, as well as costs. It enables developers to remove redundancies in code bases, ultimately leading to code that achieves the same tasks with much higher efficiency.

Artemis AI: Pioneering data centre sustainability via automated code optimisation

 

The challenge of code optimisation is that optimising code bases can itself be a strain on developers.

To remove the pain points of manual code optimisation, TurinTech AI has developed Artemis AI, a state-of-the-art automated code optimisation platform.

Artemis AI is capable of optimising code bases in a matter of minutes, resulting in more efficient software. As energy consumption directly correlates with software efficiency, implementing Artemis AI can lead to significant energy savings.

We highlight some of Artemis AI’s benefits below:

  1. Energy efficiency: Artemis AI employs cutting-edge algorithms that optimise code to perform more tasks with fewer computational resources. This means that the same software and data processing tasks, which are traditionally energy-intensive, now consume significantly less power. This reduction in energy use directly contributes to lower carbon emissions from data centres, aligning with global sustainability efforts.
  2. Substantial cost reduction: With reduced energy demands, companies can expect a noticeable decrease in their electricity bills. This cost-saving aspect of Artemis AI extends beyond just energy consumption. By streamlining code, the platform also minimises the need for hardware upgrades and maintenance, further driving down operational costs.
  3. Enhanced system performance and reliability: Beyond energy savings, optimised code means faster processing times and more reliable system performance. This translates into quicker transaction processing, real-time data analysis, and overall improved service delivery to clients. Enhanced performance also reduces the likelihood of system downtime, a critical factor in maintaining customer trust in the digital era.
  4. Scalability and flexibility: A unique feature of Artemis AI is its adaptability to various software and coding environments. This flexibility ensures that whether a company is dealing with legacy systems or the latest technologies, Artemis AI can seamlessly integrate and optimise codebases.

By adopting innovations such as Artemis AI, companies not only meet their ESG goals effectively but also gain a competitive edge through enhanced efficiency and reduced costs.

]]>
Improving energy efficiency in the financial sector with Artemis AI: Pioneering a greener future https://www.turintech.ai/improving-energy-efficiency-in-the-financial-sector-with-artemis-ai/ Fri, 01 Dec 2023 09:19:27 +0000 https://www.turintech.ai/?p=227157 At a time when digital transformation shapes the financial industry, companies find it challenging to decide which technologies to adopt, and at what scale. The level of technology adoption depends on a few critical factors, one of which is resource commitment. Increasing digitalisation and technology adoption can be extremely demanding in terms of computational and human resources.

Energy consumption is a key metric to consider in implementing cutting-edge technology solutions, as rising energy consumption leads to higher overall costs and carbon emissions. Against this backdrop, integrating sustainable technologies is imperative.

In this article, we look at how Artemis AI, a signature code optimisation platform developed by TurinTech, can help companies in the financial industry implement technology solutions more sustainably.

Performance vs emissions: The dilemma in software development

The finance sector is known for its intensive use of data and computational resources. Financial institutions such as banks work with particularly large amounts of sensitive data, which compels them to seek reliable and scalable software solutions to make the best data-driven decisions.

Activities such as detecting fraudulent transactions, credit approval and instant payment processing are critical, high-risk, and time-sensitive tasks, making it essential for financial organisations to invest in the best possible infrastructure to ensure the integrity of their systems. However, from everyday banking transactions to high-frequency trading, the energy demands of implementing such technology solutions in the finance sector are immense. This ultimately pushes up company costs, increases carbon emissions, and significantly impacts the environment.

How big is this beast?

Looking at energy consumption and carbon emission figures is helpful to understand the impact of the financial industry on the environment.

Energy costs: A server is estimated to consume around 1,800 kWh per year [1]. If a financial institution utilises 1,000 servers for computational tasks, this will cost the company around £612,000 per year in energy costs [2].
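As a sanity check, the arithmetic behind this estimate can be reproduced in a couple of lines. Note that the electricity price of roughly £0.34/kWh is an assumption inferred from the quoted total; actual tariffs vary by contract and region.

```python
# Back-of-the-envelope reproduction of the server energy-cost figures.
KWH_PER_SERVER_PER_YEAR = 1_800   # estimated consumption per server
SERVERS = 1_000
PRICE_GBP_PER_KWH = 0.34          # assumed tariff, inferred from the quoted total

annual_energy_kwh = KWH_PER_SERVER_PER_YEAR * SERVERS    # 1,800,000 kWh
annual_cost_gbp = annual_energy_kwh * PRICE_GBP_PER_KWH  # ≈ £612,000
```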

Carbon emissions: WWF UK and Greenpeace UK, in an analysis of the global emissions of the UK financial sector, estimate carbon emissions associated with selected UK private financial institutions to amount to 805 million tonnes CO2 equivalent, based on year-end disclosures from 2019.

This is almost 1.8 times the UK’s domestically produced emissions. If the financial institutions in this study were a country, they would have the 9th largest emissions in the world – larger than Germany’s (776 million tonnes CO2 equivalent) and Canada’s domestic emissions (763 million tonnes CO2 equivalent).

Figure 1: Emissions of UK financial institutions, recreated from: https://www.wwf.org.uk/sites/default/files/2021-05/uk_financed_emissions_v11.pdf

Artemis AI: A step towards sustainable finance

In an industry where software performance is critical, code optimisation is an efficient but often overlooked approach to reducing the energy consumption of software, without compromising on performance.

Artemis AI is a state-of-the-art automated code optimisation platform developed by TurinTech AI.

Artemis AI is capable of optimising code bases in a matter of minutes, resulting in more efficient software. As energy consumption directly correlates with software efficiency, implementing Artemis AI can lead to significant energy savings.

TurinTech AI research and calculations show that with Artemis AI, the energy consumption of servers can be reduced by as much as 46% per server, leading to cost savings of £281,520 per year for a business with 1,000 servers.

The Greenhouse Gas (GHG) Protocol defines three scopes for GHG accounting and reporting purposes. Scope 1 covers direct GHG emissions, scope 2 covers emissions from the generation of purchased electricity consumed by the company, and scope 3 covers other indirect emissions.

By optimising code, Artemis AI helps companies reduce the computational load on servers and data centres. This results in lower carbon emissions associated with data processing and storage, as well as running software, directly contributing to the reduction of scope 1 and scope 2 emissions of a business. Code optimisation may also lead to reductions in scope 3 emissions, for instance via reduction of emissions in products sold.

Artemis AI also helps companies significantly reduce energy consumption. Typical server energy consumption is estimated at around 1,800 kWh per server per year, with greenhouse gas emissions estimated at 0.3712 kg CO2-eq per kWh. By optimising code for better performance and lowering energy consumption by 46% on average, we estimate that Artemis AI can help companies cut emissions from 668 kg CO2-eq per server per year to 360 kg CO2-eq per server per year [3].
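These figures follow directly from the stated inputs, and can be checked in a few lines (the 46% reduction is TurinTech’s own estimate quoted above; the exact result of 360.8 kg is rounded to 360 in the text):

```python
# Worked check of the per-server emissions and cost-saving figures.
KWH_PER_SERVER = 1_800        # kWh per server per year
EMISSIONS_FACTOR = 0.3712     # kg CO2-eq per kWh
REDUCTION = 0.46              # average energy reduction with Artemis AI

baseline_kg = KWH_PER_SERVER * EMISSIONS_FACTOR  # ≈ 668 kg CO2-eq/server/year
optimised_kg = baseline_kg * (1 - REDUCTION)     # ≈ 361 kg CO2-eq/server/year

# Corresponding cost saving for the 1,000-server example:
annual_cost_gbp = 612_000
savings_gbp = annual_cost_gbp * REDUCTION        # £281,520 per year
```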

Given the importance of ESG in the current business landscape, Artemis AI demonstrates how technology can be harnessed to meet ESG goals. As the financial sector continues to evolve, tools like Artemis AI will be instrumental in ensuring that this evolution is not just technologically advanced but also environmentally responsible.

]]>
TurinTech’s Commitment to AI Ethics and Governance https://www.turintech.ai/turintechs-commitment-to-ai-ethics-and-governance/ Fri, 24 Nov 2023 10:58:23 +0000 https://www.turintech.ai/?p=227119 The UK recently held the AI Safety Summit to consider the risks of AI and to discuss how those risks can be mitigated through internationally coordinated action. The Bletchley Declaration, agreed by countries attending the AI Safety Summit, notes that “actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures”.

In an era where AI has a significant presence across many aspects of our lives, TurinTech is dedicated to building AI products that are not only innovative but also founded on the principles of ethical practice and safety. To this end, we reaffirm our commitment to the responsible development of AI and outline our guiding principles below.

Robustness is Our Cornerstone

At the core of our ethos is the belief that AI must be secure, reliable, and efficient. Our two key products, evoML and Artemis AI, are carefully crafted, maintained, and tested to ensure that they behave as intended. Our software testing processes include meticulous evaluation of how our systems handle the out-of-the-ordinary, so that our products remain resilient and dependable in an ever-changing AI landscape.

Responsibility in Innovation

Accountability and answerability are ingrained in the TurinTech culture. We scrutinise our development decisions and document them in detail to ensure rigour and clarity in the decision-making process. Careful scrutiny enables us to ensure that the AI we create, as well as the decisions it facilitates, bear the mark of our dedication. We advocate for and embody responsible innovation, ensuring that every outcome of our AI is one we can uphold with pride.

Fairness and Impartiality

There is a pressing need for AI systems to be fair and free of bias, including the biases that are present in the real world. Ensuring our AI systems are fair and unbiased is a primary objective at TurinTech. Our AI ethics and governance process includes continuous assessment of development methods and data, and rigorous bias mitigation frameworks. This enables us to ensure that the AI tools we develop can be used without fear of unfair outputs and outcomes.

Transparency and Explainability

While it is essential for AI systems to be accurate and efficient, they also need to be transparent and explainable. On the one hand, we ensure that the decisions we make as an AI company are transparent. On the other hand, we take steps such as providing users of our products with the source code of the models developed with our platform, fostering a deeper understanding and greater trust in our technology. This open approach gives greater transparency to AI-based decisions and empowers users to confidently engage with our tools.

As TurinTech continues to lead and innovate in AI, code optimisation, and machine learning, we will continually revisit and review our guiding principles to ensure we meet the industry’s best practices. We invite you to join us on this exciting journey as we create a future where AI is ethical, safe, and beneficial for all.

]]>