Marketing Spend Optimization: Why AI Is the Key to Higher ROI


Written by

Jacob Zweig, Managing Director

Published

April 16, 2025

AI & Machine Learning
AI-Driven Marketing

With marketing efforts spread across countless channels, each dollar spent—and each customer touchpoint—carries more weight and more complexity than ever.

Unfortunately, many brands still rely on outdated marketing models: last-click attribution, rigid budget plans, and disconnected reporting systems. These traditional approaches can’t capture the full story, leading to missed opportunities and wasted spend.

It’s time to move beyond guesswork. With the rise of AI-powered tools like Multi-Touch Attribution (MTA) and Media Mix Modeling (MMM), brands can now track the complete customer journey, attribute value across every channel, and continuously optimize their marketing strategy in real time.

In this post, we’ll explore how AI is reshaping marketing strategy—from smarter budget allocation to advanced attribution models—and how OneSix can help you turn insights into impact.

200%

Increase in return on ad spend (ROAS)

15%

Increase in sales

Why Traditional Marketing Strategies Fall Short

Many marketing teams still rely on legacy models—last-click attribution, manual reporting, and siloed channel analysis. These outdated methods make it nearly impossible to understand the full customer journey or justify budget allocation decisions.

In a world where customers interact with brands across multiple devices, platforms, and stages of decision-making, traditional marketing approaches simply can’t keep up.

AI-Driven Budget Allocation Optimization

AI models can analyze historical performance, campaign goals, and channel effectiveness to recommend how to allocate your marketing budget across platforms like Google Ads, social media, email, and display. Instead of relying on static budgets set months in advance, AI enables dynamic, responsive decision-making—so you’re always investing where it counts.

Multi-Touch Attribution (MTA)

See the full picture of the customer journey.

Understanding the effectiveness of your marketing efforts is no small feat—especially when customer journeys span a wide array of online and offline channels. That’s where Multi-Touch Attribution (MTA) comes in.

MTA is a powerful framework that helps marketers understand how different touchpoints—like social media ads, search campaigns, email marketing, and website visits—contribute to a customer’s decision to buy or engage. Unlike basic models that assign all the credit to the first or last interaction, MTA assigns value to multiple touchpoints across the journey, providing a more accurate, data-informed view of marketing performance.

Traditional Attribution Models: A Limited View

Before diving into advanced techniques, it’s helpful to understand where many marketers start: simple rule-based models such as first-touch, last-touch, linear, and time-decay attribution.

While easy to implement, these models often produce incomplete or misleading insights, especially when trying to optimize spend across diverse marketing channels.

Modern MTA Models: Deep Learning for Deeper Insight

As marketing channels become more complex and customer journeys more fragmented, modern AI-driven models are filling the gap. Advanced MTA approaches—like LSTM networks, Transformers, and Temporal Convolutional Networks (TCNs)—can model sequential customer behavior, learn from historical data, and accurately assign value to each touchpoint.

LSTM-Based Attribution

Long Short-Term Memory (LSTM) networks are a type of recurrent neural network (RNN) ideal for analyzing sequences. They can process long customer journeys, understand the timing and order of interactions, and identify which touchpoints had the greatest influence on a conversion. By calculating gradients (i.e., how much a small change in one touchpoint affects the outcome), LSTM models can attribute precise credit to each step along the way.
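To make the gradient idea concrete, here is a minimal PyTorch sketch of LSTM-based attribution. The channel list, journey, and model sizes are illustrative assumptions rather than a production architecture, and the model would need to be trained on labeled journeys before its attributions mean anything.

# A minimal sketch of gradient-based attribution with an LSTM (PyTorch).
# Channel vocabulary, dimensions, and the example journey are illustrative assumptions.
import torch
import torch.nn as nn

CHANNELS = ["paid_search", "social", "email", "display"]  # hypothetical channels
N_CHANNELS = len(CHANNELS)

class LSTMAttribution(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(N_CHANNELS, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # conversion logit

    def forward(self, x):
        out, _ = self.lstm(x)          # x: (batch, seq_len, n_channels), one-hot per touchpoint
        return self.head(out[:, -1])   # use the final state to predict conversion

model = LSTMAttribution()
# ... train `model` on historical journeys labeled converted / not converted ...

# Attribution for a single journey: gradient of the conversion score w.r.t. each touchpoint.
journey = torch.zeros(1, 3, N_CHANNELS)       # a 3-step journey: social -> email -> paid_search
for t, ch in enumerate(["social", "email", "paid_search"]):
    journey[0, t, CHANNELS.index(ch)] = 1.0
journey.requires_grad_(True)

score = torch.sigmoid(model(journey)).sum()
score.backward()

# Gradient x input gives per-touchpoint credit; normalize to get attribution weights.
credit = (journey.grad * journey).sum(dim=-1).squeeze(0).detach()
weights = credit / credit.abs().sum()
for t, ch in enumerate(["social", "email", "paid_search"]):
    print(f"step {t} ({ch}): attribution weight {weights[t].item():.3f}")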

Transformer-Based Attribution

Transformers—famous for powering models like ChatGPT—excel at understanding relationships between touchpoints, regardless of distance in the sequence. Their self-attention mechanism lets the model weigh how every touchpoint relates to every other, enabling highly nuanced attribution. This approach is ideal for complex customer journeys with many simultaneous interactions across channels.

Temporal Convolutional Networks (TCNs)

TCNs are another powerful option for modeling time-ordered data. Unlike RNNs, they use dilated convolutions to analyze sequences in parallel, which leads to faster processing and high accuracy. TCNs work especially well when journey lengths vary from customer to customer.

Applications of MTA: From Insight to Action

So how do these models translate into better business outcomes?

Smarter Budget Allocation

MTA helps marketers identify true ROI across channels and adjust budgets accordingly. For instance, if social media drives early engagement but email converts, you can confidently invest in both.

Customer Journey Optimization

MTA reveals the actual sequence of touchpoints that lead to bookings or purchases. This insight helps refine not just messaging and creative, but also the order, timing, and targeting of campaigns.

Hyper-Personalization

With granular attribution data, you can tailor marketing strategies to specific segments—delivering more relevant offers across the right channels.

From Attribution to Action: Budget Optimization in Practice

Once an MTA model is trained, it produces attribution weights that quantify each touchpoint’s influence on conversions. These weights can be used to solve a mathematical optimization problem: how to distribute your marketing budget across channels to maximize conversions or revenue.

For example, suppose your MTA model outputs an attribution weight for each channel.

You can use optimization techniques (e.g., linear programming or gradient descent) to allocate your budget in a way that maximizes return, while also considering constraints like minimum spend thresholds or strategic goals.
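As a minimal sketch of that optimization step, the snippet below allocates a budget with linear programming (scipy). The attribution weights, total budget, and minimum-spend constraints are hypothetical placeholders.

# A minimal sketch of attribution-weighted budget allocation via linear programming.
# The channel weights and constraints below are hypothetical placeholders.
from scipy.optimize import linprog

channels  = ["paid_search", "social", "email", "display"]
weights   = [0.40, 0.30, 0.20, 0.10]          # hypothetical MTA attribution weights
budget    = 100_000                           # total budget
min_spend = [10_000, 10_000, 5_000, 5_000]    # strategic minimums per channel

# Maximize sum(weight_i * spend_i)  ->  minimize the negative.
c = [-w for w in weights]
# One inequality: total spend <= budget.
A_ub, b_ub = [[1, 1, 1, 1]], [budget]
bounds = [(lo, budget) for lo in min_spend]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for ch, spend in zip(channels, res.x):
    print(f"{ch}: ${spend:,.0f}")
# Note: a purely linear objective concentrates spend in the top-weighted channel;
# real systems add diminishing-returns (saturation) terms, as in the MMM section below.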

OneSix helps brands take these results and apply them in the real world—building automated budget optimization systems that adjust spend in real time based on performance data and predictive insights.

Media Mix Modeling (MMM)

Optimize every marketing dollar you spend.

In today’s privacy-conscious environment, Media Mix Modeling (MMM) is gaining traction as a powerful, cookie-free approach to understanding marketing impact.

MMM uses aggregated historical data to quantify how different marketing activities—like TV, paid search, influencer campaigns, or email—affect outcomes like revenue, conversions, or customer lifetime value. It’s especially valuable when dealing with long buying cycles, offline conversions, or regional campaign variations.

Why Brands Are Turning to MMM

As marketing strategies grow more complex, so does the challenge of proving ROI. MMM addresses this by offering:

Improved ROI Visibility

MMM pinpoints which marketing efforts actually drive results, helping you spend smarter across channels.

Increased Accountability

With clear metrics on effectiveness, you can confidently justify your budget decisions to leadership.

Real-Time Optimization

With modern tooling and infrastructure, MMM isn’t just a once-a-year exercise—it can be run regularly to adapt to market changes.

Multi-Touch Influence

MMM can capture the cumulative impact of various touchpoints—even those traditionally difficult to measure, like print media or influencer impressions.

Privacy Resilience

Unlike methods that rely on user-level tracking or cookies, MMM uses aggregate data, making it a future-proof strategy in a privacy-first world.

Reduced Bias in Decision-Making

Advanced MMM models automate decisions around ad fatigue, seasonality, and spend thresholds, removing guesswork and gut-feeling from critical marketing calls.

How MMM Works: The Mechanics Behind the Model

MMM builds a statistical model that connects marketing activities and external factors to your key business outcomes. Here are two of the most important concepts:

Adstocking

Not all marketing effects are instant. Adstocking accounts for the delayed impact of a campaign—for example, the lingering effect of a billboard or a TV commercial. This allows the model to recognize how impressions continue to influence behavior days or weeks after the initial exposure.

Saturation

Every channel has a point of diminishing returns. MMM models use saturation curves (often modeled with a Hill function) to understand when added spend in a channel stops yielding proportional returns. This is crucial when planning budgets across multiple media types with vastly different spend efficiency curves.

MMM also adjusts for external factors like pricing, market conditions, and seasonality—ensuring you isolate marketing’s true impact.
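For intuition, here is a minimal sketch of the two transforms described above, geometric adstock and a Hill-style saturation curve. The decay and saturation parameters are illustrative assumptions that a real MMM would estimate from data.

# A minimal sketch of two core MMM transforms: geometric adstock and Hill saturation.
# Parameter values and the weekly spend series are illustrative assumptions.
import numpy as np

def adstock(spend, decay=0.6):
    """Carry over a fraction of each period's effect into the following periods."""
    carried = np.zeros_like(spend, dtype=float)
    for t, x in enumerate(spend):
        carried[t] = x + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

def hill_saturation(x, half_saturation=50.0, shape=2.0):
    """Diminishing returns: response approaches 1.0 as adstocked spend grows."""
    return x**shape / (half_saturation**shape + x**shape)

weekly_spend = np.array([0, 100, 80, 0, 0, 60, 40], dtype=float)
response = hill_saturation(adstock(weekly_spend))
print(np.round(response, 3))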

Off-the-Shelf vs. Bespoke MMM: Choosing the Right Fit

There are a number of tools available to implement MMM—each with its strengths and trade-offs.

Off-the-Shelf Tools

Packaged and open-source MMM tools make it possible to stand up a baseline model quickly, but they generally assume standardized data and leave limited room for customization.

Custom/Bespoke MMM Solutions

For brands with unique needs—such as regional campaign structures, legacy data systems, or complex business rules—a custom MMM model may be the best route. These models offer the flexibility to reflect those specifics directly in the model and to evolve as the business changes.

OneSix partners with clients to design and implement bespoke MMM solutions, from initial data exploration through production-ready deployment—ensuring that the model aligns tightly with your business goals and marketing operations.

From Modeling to Optimization: Turning Insights into Action

Once an MMM model is built, it generates a set of channel-level performance metrics—like marginal ROI and efficiency curves. These metrics feed directly into budget optimization models, helping you decide how much to spend on each channel to maximize ROI, given your total budget and business constraints.

For example, suppose the model reports each channel’s marginal ROI at its current spend level, along with its efficiency curve.

Using these inputs, OneSix can help you solve for the optimal budget allocation using methods like linear programming or Bayesian optimization—automating the process of getting the most out of your spend.
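As one illustration of what that optimization step could look like, the sketch below allocates a fixed budget across channels using saturating response curves. A general nonlinear optimizer stands in here for the linear programming or Bayesian optimization mentioned above, and every number is hypothetical.

# A minimal sketch of allocating a fixed budget across channels using
# MMM-style response curves. Curve parameters and the budget are hypothetical.
import numpy as np
from scipy.optimize import minimize

channels = ["tv", "paid_search", "email"]
# Each channel's incremental revenue is modeled as a saturating curve of spend:
# revenue_i(s) = top_i * s / (s + half_sat_i)
top      = np.array([500_000, 300_000, 80_000])   # hypothetical maximum incremental revenue
half_sat = np.array([200_000, 100_000, 20_000])   # hypothetical half-saturation spend
budget   = 300_000

def neg_revenue(spend):
    return -np.sum(top * spend / (spend + half_sat))

constraints = [{"type": "eq", "fun": lambda s: s.sum() - budget}]
bounds = [(0, budget)] * len(channels)
x0 = np.full(len(channels), budget / len(channels))

res = minimize(neg_revenue, x0, bounds=bounds, constraints=constraints, method="SLSQP")
for ch, spend in zip(channels, res.x):
    print(f"{ch}: ${spend:,.0f}")
print(f"expected incremental revenue: ${-res.fun:,.0f}")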

Move Beyond Guesswork. Start Optimizing with AI.

Modern marketing requires more than clever creative—it demands clarity, precision, and adaptability. AI-powered solutions like MTA and MMM help brands cut through complexity and optimize every dollar. At OneSix, we build advanced marketing analytics frameworks that drive visibility, efficiency, and smarter decisions. Ready to make your marketing work smarter? Let’s talk about how to elevate your strategy and optimize your spend.

Contact Us

Right Message, Right Time: How AI is Transforming Modern Marketing


Written by

Jacob Zweig, Managing Director

Published

April 1, 2025

AI & Machine Learning
AI-Driven Marketing

Today’s customers don’t just want personalized experiences—they expect them. Whether shopping online, engaging with content, or exploring new services, people are looking for brands that understand their needs and speak to them on an individual level.

The problem? Traditional batch-and-blast marketing simply doesn’t cut it anymore. Generic messages sent to broad audiences risk being ignored—or worse, driving customers away.

To stay competitive, brands must move beyond one-size-fits-all campaigns and embrace AI-driven personalization. By harnessing the power of first-party and third-party data, businesses can gain deeper insight into customer behavior and deliver targeted, real-time messaging that increases engagement, drives loyalty, and boosts long-term value.

At OneSix, we help companies put data to work—building smarter, adaptive marketing strategies that deliver the right message to the right customer at exactly the right time. By integrating AI into their marketing strategies, our clients are unlocking measurable, data-backed results:

10%

Increase in customer visits

6%

Increase in profitability

15%

Increase in sales

Fueling Personalization with First- & Third-Party Data

Data is more than just a business asset—it’s the foundation for delivering relevant, high-impact customer experiences. By combining first-party and third-party data, brands can unlock deeper insights, close data gaps, and build smarter, more personalized marketing strategies.

Higher-Quality Insights for Better Customer Profiles

First-party data—collected directly from customer interactions across websites, apps, and transactions—offers high-quality, trustworthy insights into individual behaviors, preferences, and purchase history. This rich data allows brands to build detailed customer profiles and target specific segments with precision.

When paired with third-party data, which provides broader market context and behavioral trends, these profiles become even more robust. The result is a more complete view of each customer and better-informed marketing decisions.

Enhanced Experiences and Differentiated Value

First-party data helps identify customer needs, pain points, and preferences in real time—allowing brands to deliver timely, relevant offers and personalized recommendations. This not only improves the customer experience but also builds long-term loyalty.

Third-party insights enhance this by offering visibility into external factors—like competitive activity, seasonal trends, or consumer behaviors across other platforms—enabling brands to refine their value propositions and stand out in a crowded market.

Smarter Targeting and Hyper-Personalization

A combined data approach allows brands to fine-tune their targeting strategies. First-party data provides individual-level detail, while third-party data offers a broader lens into market behavior.

Together, they enable hyper-personalized campaigns—whether it’s tailoring product recommendations, suggesting relevant content in real time, or customizing messages for specific audience segments across digital channels.

Predictive Analytics That Drive Growth

While first-party data offers a historical lens into customer behavior, third-party data adds predictive power when fed into AI models. Together, this combination supports use cases such as churn prediction, customer lifetime value estimation, and demand forecasting.

By leveraging both datasets through AI, brands can make smarter, faster decisions that anticipate customer needs and drive revenue growth.

Smarter Engagement Through AI

AI is fundamentally changing how brands understand, target, and engage with their audiences. From acquiring new customers to deepening relationships with loyal ones, AI-driven models enable personalized, data-informed strategies that deliver measurable results across the customer journey.

Customer Segmentation Modeling

Segmentation powered by AI goes far beyond traditional demographic-based grouping. For unknown or prospective users, techniques such as clustering and lookalike modeling allow brands to generalize insights from known customer behaviors to broader audiences across digital platforms. These models help define high-value segments and guide user acquisition strategies.

For known users, AI enables dynamic segmentation based on up-to-the-moment behavioral data, allowing for hyper-targeted messaging that evolves as the customer does.

Real-World Example

A retail brand may use lookalike modeling to identify new prospects who mirror the behavior and preferences of their most valuable customers, tailoring digital advertising to attract high-intent buyers.

Customer Propensity Modeling

Propensity models leverage a wide range of data—including behavioral, contextual, and third-party inputs—to predict the likelihood of specific customer actions. These models help marketers identify which customers are most likely to purchase, upgrade, convert, or churn, allowing for more effective targeting and optimized marketing spend.

With AI, marketers can prioritize offers, customize messaging, and allocate resources based on real-time intent rather than static assumptions.

Real-World Example

A SaaS company could use propensity scoring to identify which website visitors are most likely to sign up, and immediately serve personalized trial offers through digital ads or email campaigns.
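A minimal sketch of how such a propensity score might be produced is shown below, using a gradient-boosted classifier from scikit-learn. The features, toy data, and decision threshold are illustrative assumptions rather than a prescribed schema.

# A minimal sketch of a conversion-propensity model; feature names, data,
# and the 0.5 threshold are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

visitors = pd.DataFrame({
    "pages_viewed":      [3, 12, 1, 8, 5, 20, 2, 9],
    "pricing_page_hits": [0, 2, 0, 1, 1, 3, 0, 2],
    "days_since_visit":  [30, 1, 90, 3, 7, 0, 60, 2],
    "signed_up":         [0, 1, 0, 1, 0, 1, 0, 1],   # historical outcome
})

X, y = visitors.drop(columns="signed_up"), visitors["signed_up"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score new visitors and flag the high-propensity ones for a personalized offer.
scores = model.predict_proba(X_test)[:, 1]
for score in scores:
    action = "serve trial offer" if score > 0.5 else "standard nurture"
    print(f"propensity {score:.2f} -> {action}")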

Real-Time Personalization

When engaging with known customers, AI plays a critical role in determining what to do next. By combining models such as Lifetime Value (LTV), churn prediction, and next-best-action optimization, brands can understand likely customer behavior and tailor marketing strategies accordingly.

Next Best Action (NBA) models go beyond traditional rule-based decision systems by dynamically adapting to real-time data and customer context. Rather than relying on static flows or pre-defined triggers, AI-driven NBA strategies evaluate a wide range of inputs—behavioral signals, preferences, environmental context—to surface the most relevant message, offer, or action at any given moment.

These models continuously learn from customer interactions across digital and physical touchpoints, enabling real-time personalization at scale. Whether it’s identifying the best time to send a message, recommending the right offer, or selecting the most effective channel, AI helps ensure each interaction is relevant, timely, and impactful.

Real-World Example

A leading casino implemented a real-time marketing engine, built on Next Best Action modeling, to personalize offers based on both in-casino activity and online behavior. The result was increased engagement, a 10% increase in player visits, and a 6% boost in player profitability. Explore the full case study →

The Future of Marketing Is Personalized

AI is no longer a nice-to-have—it’s a competitive necessity. In a marketplace where timing, relevance, and experience are everything, AI-driven personalization empowers brands to meet customers where they are with messaging that resonates.

From smarter segmentation and predictive targeting to real-time personalization and next-best-action optimization, AI enables marketing strategies that are more adaptive, impactful, and customer-centric.

Get Started

Ready to move beyond generic campaigns? OneSix helps companies turn data into meaningful customer experiences that drive loyalty and long-term value. Get in touch with us for a consultation.

Contact Us

AI’s Next Big Shift: What Business Leaders Need to Know


Written by

James Townend & Nina Singer, Lead ML Scientists

Published

March 19, 2025

AI & Machine Learning
AI Agents & Chatbots

Artificial Intelligence continues to transform the tech landscape at breakneck speed. AI is driving innovation in every sector from how we process queries to the tools we use for automation. Below are five key trends shaping AI’s evolution in 2025—and why they matter.

1. Inference-Time Compute

"AI designers have a new control lever – spend more compute per query for higher accuracy and better reliability."
James Townend
Lead ML Scientist

Traditionally, AI performance scaled primarily with training-time compute: We spent more resources to train bigger models on more data. Now, inference-time compute—the compute spent when a trained model answers a query—has become a major new control lever.

Why It Matters

The Bigger Picture

As models shift more reasoning to real-time computation, the hardware and infrastructure for user-facing AI will need to scale to support these heavier inference workloads. This also opens opportunities for edge inference, which involves moving some computation onto devices like phones, robots, and IoT systems.

2. Enterprise Search Is Good Now

"LLMs have dramatically improved search through RAG, unlocking value from previously challenging document stores."
James Townend
Lead ML Scientist

Enterprise search was an afterthought for years, plagued by siloed data sources, poorly structured documents, and lack of meaningful relevance signals. Modern vector embeddings have changed everything, making Retrieval-Augmented Generation (RAG) the new standard.

Why It Matters

The Bigger Picture

With vector search and RAG, enterprise search resembles a true domain-expert assistant. Organizations finally have the tools to leverage vast document stores efficiently. It’s akin to what Google did for the early public internet—now applied to private, internal data.

3. AI Agents

"AI agents transform software interaction by automating multi-step workflows."
James Townend
Lead ML Scientist

The next revolution in AI-driven automation is the rise of AI Agents: task-oriented, often autonomous systems that can robustly interact with software and data.

Why It Matters

Important Considerations

Agents remain unpredictable at times, owing to LLMs’ black-box nature. For critical systems, this means keeping humans in the loop, constraining what agents are allowed to do, and monitoring their behavior closely.

The Bigger Picture

We’ll see agents increasingly embedded in customer support, “low-code” software platforms, and legacy system integrations. However, organizations must weigh the potential for cost overruns (since agents call models often) against the productivity gains they deliver.

4. The Future of Openness

"As competition intensifies, we see an uptick of LLMs embracing open weights. Distilled models emerge to close the gap."
Nina Singer
Sr. Lead ML Scientist

Competition among large language models is intensifying, and with it comes a surge in open-weight models. Alongside these publicly accessible models, distilled versions—trained to mimic larger “teacher” models—are emerging as credible, cost-effective alternatives.

Why It Matters

The Bigger Picture

Open-source foundational models empower companies and researchers worldwide to build specialized solutions without huge licensing fees. This explosion in open models not only accelerates AI adoption but also raises questions about responsible use, governance, and the sustainability of massive training runs.

5. Capability Overhang

"As AI advances, new questions emerge: How else can we harness its potential? Who else can contribute to its development? How do we control its impact?"
Nina Singer
Sr. Lead ML Scientist

“Capability overhang” describes a scenario in which technology’s potential outstrips its immediate adoption and integration. We’re already seeing this with LLMs, where industrial and societal constraints—such as regulatory hurdles, skills shortages, and legacy system inertia—lag behind the AI’s actual abilities.

Why It Matters

The Bigger Picture

As AI’s capacity grows, the conversation shifts from “can we do it?” to “how should we do it responsibly?” The real power of LLMs will come from well-regulated, well-structured integrations that extend beyond flashy demos into meaningful, society-wide improvements.

Shaping the AI-Driven Future

From inference-time compute revolutionizing AI economics to enterprise search finally delivering on its promise, these five trends highlight a pivotal moment in AI’s evolution. Agents will streamline workflows, open-source models will democratize access, and the looming capability overhang challenges everyone—from entrepreneurs to regulators—to adapt responsibly.

As the AI frontier broadens, it’s up to us—innovators, policymakers, and everyday users—to steer its tremendous potential toward positive, inclusive progress. The question is no longer if AI can do something, but rather how we’ll harness its power to create lasting impact.

Get Started

Integrate these insights into your business strategy and make the most of what AI has to offer. OneSix can help you put emerging AI trends to work and see first-hand the impact they can have on your business.

Contact Us

A Practical Guide to Data Science Modeling: Lessons from the Book ‘Models Demystified’


Written by

Brock Ferguson, Managing Director

Published

February 10, 2025

AI & Machine Learning
Forecasting & Prediction

In the book Models Demystified, OneSix Sr. ML Scientist Michael Clark delves into the fundamentals of modeling in data science. Designed for practical application, the book provides a clear understanding of modeling basics, an actionable toolkit of models and techniques, and a balanced perspective on statistical and machine learning approaches.

In this blog post, we highlight the key insights from his work, diving into various modeling techniques and emphasizing the importance of feature engineering and uncertainty estimation in building reliable, interpretable models.

By mastering these fundamentals, you’ll not only unlock the full potential of predictive analytics but also equip yourself to make smarter, data-driven decisions. So, let’s demystify the science behind the models and turn complexity into clarity!

What is Data Science Modeling?

At its core, a model is a mathematical, statistical, or computational construct designed to understand and predict patterns in data. It simplifies real-world systems or processes into manageable abstractions that data scientists can utilize to derive meaningful insights and actionable recommendations. Models facilitate everything from describing patterns and predicting outcomes to supporting better, data-driven decisions.

What are the Main Types of Data Science Models?

Data science encompasses various modeling techniques, each serving distinct purposes. Here’s an overview of the primary categories.

Linear Models and More

Linear Regression and Extensions

This category encompasses linear regression as a starting point, and extends to generalized linear models (GLMs), including logistic regression for binary targets, and Poisson regression for counts. Further extensions include generalized additive models (GAMs) for non-linear relationships, and generalized linear mixed models (GLMMs) for hierarchical data.

Special Considerations

A variety of modeling approaches are necessary when working with specific data types, such as time-series, spatial, censored, or ordinal data. These data types often exhibit unique characteristics that may necessitate specialized modeling techniques or adaptations to existing models.

Ease and Interpretability

Linear models are prized for their ease of implementation and their ability to provide relatively clear, interpretable results. They also serve as useful baselines for more complex models and are often difficult to outperform on simpler tasks.

Machine Learning Models

Modeling Framework

Machine Learning (ML) provides a framework for systematically evaluating and improving models. It involves training models on historical data with a primary goal of making predictions or decisions on new, unseen data. The choice of model depends on the problem type, data characteristics, and desired performance metrics.

Penalized Regression

Least Absolute Shrinkage and Selection Operator (LASSO) and Ridge Regression are penalized versions of linear models commonly used for both regression and classification in a machine learning context.

Tree-Based

Tree-based models use a tree-like structure of decisions and their possible consequences, including chance event outcomes and resource costs, to model complex decision-making processes. Common tree-based machine learning algorithms include decision trees, random forests, and gradient-boosted ensembles such as XGBoost and LightGBM.

Neural Networks/Deep Learning

Neural networks are loosely inspired by the architecture of the human brain. By stacking layers of simple units, they can identify complex, non-linear relationships in data, which is what enables modern deep learning applications.

Causal Models

Identifying Effects

Causal models shift most of the modeling focus to identifying the effects of a treatment or intervention on an outcome, as opposed to predictive performance on new data. In determining causal effects, even small effects can be significant if they are consistent and replicable. For example, a small effect size in a clinical trial could still be meaningful if it leads to a reduction in patient mortality.

Random Assignment and A/B Testing

Random assignment of treatments to subjects is the gold standard for estimating causal effects. A/B testing is a common technique used in online experiments to determine the effectiveness of a treatment.

Directed Acyclic Graphs (DAGs)

Directed acyclic graphs are graphical representations that depict assumptions about the causal structure among variables, aiding in understanding and identifying causal relationships. They pave the way for different modeling approaches that help discern causal effects.

Meta Learners

Meta-learners provide a framework for estimating treatment effects and determining causal relationships between a treatment and an outcome. Common variants used to assess causal effects include the S-learner, T-learner, and X-learner, which combine standard predictive models in different ways.

Why is Data Preparation and Feature Engineering an Important Part of the Modeling Process?

Effective modeling hinges on thorough data preparation and feature engineering. Steps such as cleaning the data, handling missing values, encoding categorical variables, and engineering informative features ensure data quality and compatibility with algorithms, directly influencing model performance.

What is the Role of Statistical Rigor in Uncertainty Estimation?

Addressing uncertainty is integral to robust data science modeling. Statistical rigor ensures reliable predictions and enhances trust in model outputs. It involves:

1. Quantification Through Confidence Intervals

Confidence intervals offer a clear, quantifiable range within which model parameters are likely to fall. This approach ensures that we account for variability in estimates, highlighting the degree of precision in model predictions.

2. Prediction Intervals for Future Observations

Unlike confidence intervals, prediction intervals extend uncertainty quantification to individual predictions. These intervals provide a realistic range for where future data points are expected, accounting for the inherent variability in outcomes.

3. Bootstrapping for Distribution Estimation

Bootstrapping is a statistically rigorous, non-parametric technique that involves resampling the data to estimate the uncertainty of parameters and predictions. It is particularly useful when traditional analytical solutions are infeasible, providing robust insights into variability.
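As a quick illustration, the sketch below bootstraps a 95% confidence interval for a mean; the data are synthetic and exist purely to show the mechanics.

# A minimal sketch of bootstrapping a 95% confidence interval for a mean.
# The observations are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.normal(loc=100, scale=15, size=200)   # e.g., 200 observed order values

boot_means = np.array([
    rng.choice(observed, size=observed.size, replace=True).mean()
    for _ in range(5_000)
])

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {observed.mean():.1f}, 95% bootstrap CI = ({lower:.1f}, {upper:.1f})")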

4. Bayesian Methods for Comprehensive Uncertainty Estimation

Bayesian approaches allow for a more comprehensive treatment of uncertainty by incorporating prior information and deriving posterior distributions. This method propagates uncertainty through the entire modeling process, offering a more nuanced understanding of variability in predictions.

5. Model Validation and Testing

Employing techniques such as cross-validation ensures that model predictions generalize well to unseen data. Rigorous testing methods reveal the extent of overfitting and provide an honest assessment of model reliability.

6. Assumption Checking and Diagnostics

Statistical rigor requires a careful evaluation of the assumptions underlying a model. When these assumptions are violated, it can lead to substantial uncertainty in the results, making thorough diagnostics and model refinement critical to minimizing risks and ensuring reliable outcomes.

Key Takeaway: Building a Strong Foundation in Data Science Modeling

Data science modeling is solving real business challenges today, enabling demand forecasting, inventory management, and logistics optimization that lead to cost savings and improved efficiency across supply chains.

Incorporating processes like feature engineering, uncertainty estimation, and robust validation ensures that your models are not only reliable but also interpretable and adaptable to real-world complexities.

As artificial intelligence and machine learning continue to advance, we can expect models to become increasingly automated, adaptive, and precise. The future will likely emphasize real-time predictive analytics, empowering industries to anticipate trends, streamline operations, and make informed decisions with enhanced accuracy.

The journey doesn’t end with building models—it’s about using them to transform challenges into opportunities. Ready to dive deeper? You can access expert insights and further your understanding of data science modeling in the book Models Demystified, written by OneSix Sr. ML Scientist Michael Clark. The print version of the book will be out this year from CRC Press as part of the Data Science Series.

Navigate Future Developments With Data Science Modeling

We can help you get started with expert insights and practical guidance to build and optimize data-driven models for your needs.

Contact Us

Using AI to Extract Insights from Data: A Conversation with Snowflake


Published

February 6, 2025

During Snowflake’s World Tour stop in Chicago, Data Cloud Now anchor Ryan Green sat down with leaders from OneSix. During the conversation, Co-founder and Managing Director Mike Galvin and Senior Manager Ryan Lewis note how Snowflake’s technology has changed the game, allowing OneSix and its customers to focus less on how to build data infrastructure and more on how to extract insights from data, whether through AI, reporting, or dashboarding.

Get More from Your Data with Snowflake

As a Premier Snowflake Services Partner, we drive practical business outcomes by harnessing the power of Snowflake AI Data Cloud. Whether you’re starting with Snowflake, migrating from a legacy platform, or looking to leverage AI and ML capabilities, we’re ready to support your journey.

Contact Us

Smarter Forecasting: How ML is Redefining Demand Prediction


Written by

Jacob Dink, AI/ML Director

Published

January 23, 2025

AI & Machine Learning
Forecasting & Prediction

Customers today are faced with more choices than ever, prompting businesses to step up their game in a fiercely competitive global market. To thrive, companies must not only provide exceptional value but also anticipate customer demand effectively. Traditional forecasting methods, which often rely on simple extrapolation of historical data, can hinder growth and scalability.

This is where machine learning-powered demand forecasting and inventory optimization come into play. These advanced techniques enable businesses to predict demand accurately, allocate resources efficiently, adapt to market fluctuations, and foster long-term customer loyalty.

In this post, we’ll explore how leveraging demand forecasting and inventory optimization can streamline operations and why these strategies are essential for any modern business.

How Can Forecasting Unlock Business Agility and Accuracy?

Businesses need to stay ahead of shifting demands and unpredictable market changes. Demand forecasting and optimization empower organizations to predict future needs, align resources, and respond with confidence to evolving conditions.

Leveraging advanced analytics and AI-driven insights can help businesses sift through vast volumes of data and unlock multifaceted benefits, from sharper demand predictions and leaner inventory to faster responses to market shifts.

Which Machine Learning Approaches Can You Use for Demand Forecasting?

Time-Series Forecasting

Time-series forecasting analyzes sequential historical data to predict future values. Techniques such as Autoregressive Integrated Moving Average (ARIMA) and exponential smoothing are commonly used to identify patterns like trends and seasonality, enabling businesses to anticipate demand fluctuations over time.

By integrating machine learning with time-series analysis, businesses can make informed decisions on inventory management and pricing strategies, ultimately reducing costs associated with overstocking or stockouts.
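For a concrete starting point, here is a minimal ARIMA sketch using statsmodels; the synthetic demand series and the (1, 1, 1) order are illustrative assumptions, not a recommended configuration.

# A minimal sketch of a classical time-series forecast with ARIMA (statsmodels).
# The synthetic monthly demand series and model order are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
demand = pd.Series(
    100 + np.arange(36) * 2 + rng.normal(0, 5, 36),       # trend plus noise
    index=pd.date_range("2022-01-01", periods=36, freq="MS"),
)

model = ARIMA(demand, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=6)   # demand for the next six months
print(forecast.round(1))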

Regression Analysis

Regression models predict a continuous dependent variable, such as sales volume, based on one or more independent variables, like price or marketing spend.

These models enable businesses to quantify relationships between variables, helping them understand how different factors influence demand and make informed decisions accordingly.

Regression analysis can be made more robust and adaptable with machine learning techniques such as regularization, ensemble methods, and automated feature selection.

Neural Networks

Neural networks, particularly deep learning models, can identify complex, non-linear patterns and relationships within data. They can model intricate interactions between variables, making them powerful tools for capturing the multifaceted nature of demand influences.

When those complex relationships matter, neural networks can significantly outperform traditional forecasting methods.

For companies with large datasets across various regions and products, neural networks—such as long short-term memory (LSTM) networks and recurrent neural networks (RNNs)—can improve forecast accuracy and streamline inventory management.

Reinforcement Learning

Reinforcement Learning lets you make sequential decisions that maximize a long-term reward. In demand forecasting, this approach helps you continuously learn from outcomes and optimize strategies, thereby improving decision-making over time.

Additionally, Reinforcement Learning can help you adapt pricing, replenishment, and promotion decisions dynamically as market conditions change.

Bayesian Analysis

Bayesian models incorporate prior knowledge and update predictions as new data becomes available to estimate the likelihood of various outcomes. This dynamic and flexible forecasting approach operates on the principle of updating beliefs about uncertain parameters through Bayes’ theorem.

Unlike traditional forecasting methods, Bayesian models produce a distribution of possible outcomes and enable businesses to understand the range of potential future demands and the associated risks.

In industries with intermittent demand patterns, Bayesian methods can effectively combine historical knowledge with sparsely observed data to estimate future needs, thereby improving inventory management.

Hierarchical Forecasting

Hierarchical forecasting is used in scenarios with nested time series that together add up to a coherent whole. For instance, predicting sales at a national level can be broken down into regions, stores, and individual products. This method ensures consistency across different aggregation levels and leverages data from various hierarchy levels to improve accuracy.

In energy management, hierarchical forecasting can predict consumption patterns across different sources (e.g., solar, wind) and geographical areas, facilitating better grid management and resource allocation.

Forecasting patient admissions or medical supply needs across hospitals and regions is another application where hierarchical forecasting provides consistency across levels.

Hierarchical forecasting example
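One simple way to keep the levels coherent is bottom-up reconciliation: forecast the lowest level and aggregate upward. A minimal sketch, assuming a hypothetical store-region-national hierarchy:

# A minimal sketch of bottom-up hierarchical forecasting: forecast each store,
# then aggregate so regional and national forecasts stay consistent.
# The hierarchy and numbers are illustrative assumptions.
import pandas as pd

# Next-quarter store-level forecasts (produced by any base model).
store_forecasts = pd.DataFrame({
    "region": ["midwest", "midwest", "south", "south"],
    "store":  ["chi_01", "chi_02", "atl_01", "atl_02"],
    "forecast_units": [1200, 950, 1400, 1100],
})

region_forecasts = store_forecasts.groupby("region")["forecast_units"].sum()
national_forecast = region_forecasts.sum()

print(region_forecasts)
print(f"national: {national_forecast} units")
# By construction, stores sum to regions and regions sum to the national total,
# which is the coherence property hierarchical forecasting requires.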

Multivariate Forecasting

In multivariate forecasting, multiple related time series are modeled together to capture the relationships between them. For instance, forecasting the demand for related product lines simultaneously can provide insights that improve the accuracy of each forecast by considering the interplay between products.

The multivariate forecasting approach also incorporates factors such as promotional activities, competitor pricing, and economic indicators to enable retail and sales forecasting. Moreover, this method considers lead times, supplier reliability, and market trends to optimize inventory levels and production schedules.

As organizations seek data-driven insights for decision-making, implementing multivariate forecasting will be essential for optimizing operations and enhancing competitiveness in dynamic markets.

Hybrid Forecasting

Hybrid forecasting combines multiple forecasting methods to leverage the strengths of each. Integrating different models enables businesses to achieve more robust and accurate predictions, accommodate various data patterns, and mitigate the limitations inherent in single-method approaches.

Retailers can combine historical sales data with promotions and seasonality to predict sales.

In healthcare, hybrid forecasting integrates historical usage data with factors such as seasonal illness patterns and demographic changes to optimize inventory levels for medical supplies.

For instance, a hybrid model can use the Autoregressive Integrated Moving Average (ARIMA) method to identify trends while also using a neural network to understand complex, non-linear influences, such as the effects of marketing campaigns.

Hybrid forecasting example
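The sketch below follows the ARIMA-plus-neural-network pattern described above, with a small scikit-learn MLP standing in for the neural component; the synthetic data, the promotion feature, and the model choices are illustrative assumptions.

# A minimal sketch of the hybrid pattern: ARIMA captures the baseline trend,
# and a small neural network models what ARIMA leaves behind (e.g., promo lift).
# The synthetic data and the MLP stand-in are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 104  # two years of weekly sales
promo = rng.integers(0, 2, n)                       # 1 when a promotion ran
sales = 200 + np.arange(n) * 0.5 + promo * 40 + rng.normal(0, 8, n)
series = pd.Series(sales, index=pd.date_range("2023-01-02", periods=n, freq="W-MON"))

# Step 1: ARIMA for the baseline trend.
arima = ARIMA(series, order=(1, 1, 1)).fit()
baseline = arima.predict(start=0, end=n - 1)

# Step 2: a small neural network learns the residual lift driven by promotions.
residuals = series.values - baseline.values
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(promo.reshape(-1, 1), residuals)

# Hybrid forecast for next week = ARIMA baseline + predicted promo lift.
next_baseline = arima.forecast(steps=1).iloc[0]
next_promo = np.array([[1]])                        # assume a promotion is planned
hybrid = next_baseline + mlp.predict(next_promo)[0]
print(f"hybrid forecast for next week: {hybrid:.1f} units")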

Key Takeaways

Mastering customer demand is no easy feat—it requires precision, insight, and adaptability.

Demand forecasters harness a range of tools, from time series analysis to advanced machine learning models, unlocking unparalleled accuracy and transforming raw data into actionable insights. These innovations empower businesses not only to analyze the resources available today but also to anticipate future customer expectations with confidence.

By adopting demand forecasting and optimization strategies, organizations can thrive today while scaling and innovating for the future, turning better forecasts into leaner inventory, lower costs, and more reliable customer experiences.

Get Started

Stay ahead in today’s dynamic market by leveraging cutting-edge demand forecasting models and optimization strategies. We're here to help you build the strategy and technology you need to tackle your business challenges.

Contact Us

Making AI More Human: The Power of Agentic Systems


Written by

Jack Teitel, Sr. AI/ML Scientist

Published

December 13, 2024

AI & Machine Learning
AI Agents & Chatbots
Snowflake

As AI advances, large language models (LLMs) like GPT-4 have amazed us with their ability to generate human-like responses. But what happens when a task requires more than just straightforward answers? For complex, multi-step workflows, agentic systems represent a promising frontier, offering LLMs the ability to mimic human problem-solving processes more effectively. Let’s explore what agentic systems are, how they work, and why they matter.

What are Agentic Systems?

Agentic systems go beyond traditional one-shot prompting — where you input a single prompt and receive a single response — by introducing structured, multi-step workflows. These systems break down tasks into smaller components, use external tools, and even reflect on their outputs to iteratively improve performance. The goal? Higher-quality responses that can tackle complex tasks more effectively.

Why Traditional LLMs Fall Short

In a basic one-shot prompt scenario, an LLM generates a response token by token, from start to finish. This works well for simple tasks but struggles with multi-step reasoning, self-correction, and tasks that require information beyond the prompt.

For example, if you ask a standard LLM to write an essay or debug a piece of code, it might produce a flawed output without recognizing or correcting its mistakes.

One method of correcting these limitations is to use multi-shot prompting, where the user interacts with the LLM across multiple prompts. By having a conversation with the LLM, a user can point out mistakes and prompt the LLM to produce better, more refined output. However, this still requires the user to analyze the output, suggest corrections, and keep interacting with the LLM well beyond the original prompt, which can be rather time-consuming.

One-Shot Prompting

Multi-Shot Prompting

Categories of Agentic Systems

Agentic systems address these limitations by employing four key strategies:

1. Reflection

Reflection enables an LLM to critique its own output and iteratively improve it. For instance, after generating code, a reflection step allows the model to check for bugs and propose fixes automatically.

Example Workflow: generate an initial draft → critique it for errors or gaps → revise and return the improved version.

2. Tool Use

Tool use allows LLMs to call external APIs or perform actions beyond simple token generation (the only action within scope of a traditional LLM). This is essential for tasks requiring access to real-time information via web search or needing to perform specialized functions, such as running unit tests or querying up-to-date pricing.

Example Workflow: receive a question → recognize that up-to-date information is needed → call a web-search or pricing API → incorporate the result into the final answer.

3. Planning

Planning helps LLMs tackle complex tasks by breaking them into smaller, manageable steps before execution. This mirrors how humans approach large problems, such as developing an outline before writing an essay.

Example Workflow: outline the steps needed for the task → execute each step in order → assemble the intermediate results into the final deliverable.

4. Multi-Agent Systems

Multi-agent systems distribute tasks among specialized agents, each with a defined role (e.g., planner, coder, reviewer). These specialized agents are often different instances of an LLM with varying system prompts to guide their behavior. You can also utilize specialized agents that have been specifically trained to perform different tasks. This approach mirrors teamwork in human organizations and allows each agent to focus on its strengths.

Example Workflow: a planner agent breaks the task into subtasks → a coder agent implements each piece → a reviewer agent critiques the output → the planner assembles the final result.

Why Agentic Systems Matter

Agentic systems offer several advantages: higher-quality outputs, the ability to automate multi-step work end to end, and reasoning that is easier to inspect and trust.

Practical Applications of Agentic Systems

Coding Assistance

In software development, agentic systems can write code, test it, and debug autonomously. For example, an agent can generate a function, run its unit tests, and fix any failures before returning the result.

Business and Healthcare

In domains where decision-making requires transparency and reliability, agentic systems excel. By providing clear reasoning and detailed workflows, they can support high-stakes decisions while keeping every step traceable and reviewable.

Real Time Information Analysis

Many industries, including finance, stock trading and analysis, e-commerce and retail, and social media marketing, rely on real-time information as a vital component of their decision-making. For these applications, agentic systems are necessary to extend the knowledge base of stock LLMs beyond their original training data.

Creative Collaboration

From generating marketing campaigns to designing product prototypes, multi-agent systems can simulate entire teams, each agent offering specialized input, such as technical accuracy, customer focus, or business strategy.

Implementing Agentic Systems

Building agentic workflows may sound complex, but tools like LangGraph simplify the process. LangGraph, developed by the creators of LangChain, allows you to define modular agent workflows visually, making it easier to manage interactions between agents. Any code or LLM can act as a node (or agent) in LangGraph.
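As a minimal sketch of how such a workflow might be wired up, the snippet below builds a two-node draft-and-reflect graph in LangGraph. The node bodies are plain-Python placeholders for LLM calls, and the exact API may vary across LangGraph versions.

# A minimal sketch of a two-step agentic workflow in LangGraph: a drafting node
# followed by a reflection node. The node bodies are placeholders; in practice
# each would call an LLM (via Snowflake Cortex or any other provider).
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    request: str
    draft: str
    final: str

def draft_node(state: State) -> dict:
    # Placeholder for an LLM call that produces a first draft.
    return {"draft": f"Draft answer to: {state['request']}"}

def reflect_node(state: State) -> dict:
    # Placeholder for an LLM call that critiques and revises the draft.
    return {"final": state["draft"] + " (revised after self-critique)"}

graph = StateGraph(State)
graph.add_node("draft", draft_node)
graph.add_node("reflect", reflect_node)
graph.set_entry_point("draft")
graph.add_edge("draft", "reflect")
graph.add_edge("reflect", END)

app = graph.compile()
result = app.invoke({"request": "Summarize last quarter's pipeline"})
print(result["final"])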

For example, if working in Snowflake, LangGraph can be combined with Snowflake Cortex to create an agentic workflow leveraging native Snowflake LLMs, RAG systems, and SQL generation, allowing you to build complex agentic workflows in the same ecosystem as more traditional data analytics and management systems while ensuring strict data privacy and security.

For simpler use cases, platforms like LlamaIndex also support agentic capabilities, particularly when integrating data-focused workflows.

The Future of Agentic Systems

As research evolves, agentic systems are expected to remain relevant, even as base LLMs improve. The flexibility of agentic workflows ensures they can be tailored to specific domains, making them a valuable tool for automating complex, real-world tasks. In addition, as base LLMs improve, you can keep the same agentic workflows in place and simply swap in the improved models as individual agents, easily lifting overall system performance. In this way, agentic systems not only improve the accuracy of traditional LLMs but can also scale and adapt to today’s rapidly changing LLM ecosystem.

In the words of AI pioneer Andrew Ng, agentic systems represent “the next big thing” in AI. They offer a glimpse into a future where AI doesn’t just respond — it reasons, plans, and iterates like a true digital assistant.

Get Started

Ready to harness the power of Agentic AI? We’ll help you get started with tailored solutions that deliver real results. Contact us today to accelerate your AI journey.

Contact Us

Snowflake Cortex: Bringing ML and AI Solutions to Your Data


Written by

Ross Knutson, Manager

Published

May 28, 2024

AI & Machine Learning
Data & App Engineering
Snowflake

Snowflake functionality can be overwhelming. And when you factor in technology partners, marketplace apps, and APIs, the possibilities become seemingly endless. As an experienced Snowflake partner, we understand that customers need help sifting through the possibilities to identify the functionality that will bring them the most value.

Designed to help you digest all that’s possible, our Snowflake Panorama series shines a light on core areas that will ultimately give you a big picture understanding of how Snowflake can help you access and enrich valuable data across the enterprise for innovation and competitive advantage.

What is Snowflake Cortex?

The Snowflake data platform is steadily releasing more and more functionality under its Cortex service. But, what exactly is Cortex?

Cortex isn’t a specific AI feature, but rather an umbrella term for a wide variety of different AI-centric functionality within Snowflake’s data platform. The number of available services under Cortex is growing, and many of its core features are still under private preview and not generally available. 

This blog seeks to break down the full picture of what Cortex can do. It’s focused heavily on what is available today, but also speaks to what’s coming down the road. Without a doubt, we will get many more details on Cortex at Snowflake Data Cloud Summit on June 3-6. By the way, if you’ll be there, let’s meet up to chat all things data and AI.

ML Functions

Before Cortex became Cortex, Snowflake quietly released so-called “ML-Powered Functions,” which are now rebranded simply as Cortex ML Functions. These functions offer an out-of-the-box approach for training and utilizing common machine learning algorithms on your data in the Snowflake Data Cloud.

These ML functions primarily use gradient boosting machines (GBM) as their model training technique, and allow users to simply feed the appropriate parameters into the function to initiate training. After the model is trained, it can be called for inference independently or configured to store results directly into a SQL table.

As of May 2024, there are 4 available ML Functions:

Forecasting

Use this ML function to make predictions about time-series data like revenue, risk management, resource utilization, or demand forecasting.

Anomaly Detection

This function automatically detects outlier data points in a time-series dataset for use-cases like fraud detection, network security monitoring, or quality control.

Contribution Explorer

The Contribution Explorer function ranks data points by their impact on a particular output and is best used for use-cases like marketing effectiveness, program effectiveness, or financial performance.

Classification

This function trains a model that identifies a categorical value, for use-cases like customer segmentation, medical diagnosis detection, or sentiment analysis.

In general, users should remember that these Cortex ML Functions are truly out-of-the-box. In a production state, ML use-cases may require a more custom model architecture. When users outgrow the limitations of the Cortex ML Functions, the Snowpark API (and eventually Container Services) allows them to import model files directly into the Snowflake Data Cloud.

Overall, Cortex’s ML Functions provide a fast way for users to explore and test commonly used machine learning algorithms on their own data, securely within Snowflake.

LLM Functions / Arctic

Earlier this year, Snowflake made their Cortex LLM Functions generally available in select regions. These functions allow users to leverage LLMs directly within a Snowflake SQL query. Snowflake also released ‘Arctic’, their open-source language model geared toward SQL code generation.

The example below, taken directly from Snowflake’s documentation, shows how simple it is to call a language model within a SELECT statement using Cortex:

SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', 'What are large language models?');

In the first parameter, we define the language model we want to use (e.g. ‘snowflake-arctic’), and in the second parameter, we feed our prompt. This basic methodology opens up a ton of possibilities for layering the power of AI into your data pipelines, reporting and analytics, and ad-hoc research projects. For example, a data engineer could add an LLM function to standardize a free-text field during ETL. A BI developer could automatically synthesize text data from different Snowflake tables into a holistic two-sentence summary for a weekly report. An analyst could build a lightweight RAG chatbot on Snowflake Streamlit to interrogate a large collection of PDFs.
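The same function can also be reached from Snowpark Python, which is handy for the ETL scenario above. A minimal sketch, assuming placeholder connection parameters and a hypothetical customer_feedback table:

# A minimal sketch of calling the same Cortex function from Snowpark Python,
# e.g., to standardize a free-text field during ETL. Connection parameters and
# the table/column names are placeholders for your own environment.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<database>", "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

df = session.sql("""
    SELECT raw_feedback,
           SNOWFLAKE.CORTEX.COMPLETE(
               'snowflake-arctic',
               'Summarize this customer feedback in one sentence: ' || raw_feedback
           ) AS feedback_summary
    FROM customer_feedback
""")
df.show()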

Arctic

Arctic is Snowflake’s recently released open source LLM. It’s built to perform well in so-called ‘enterprise tasks’ like SQL coding and following instructions. It’s likely that Snowflake wants to position Arctic as the de facto base model for custom LLM business use-cases, particularly those that require fine-tuning.

Even more likely, the Arctic family of models will continue to grow. Document AI, for example, will give users a UI to extract data from unstructured files, like a scanned PDF, directly into a structured SQL table; this feature is built on top of the language model ‘Arctic-TILT’.

Other Cortex / Future State

Naturally, Snowflake has joined the rest of the industry in offering Snowflake Copilot to assist developers as they work with Snowflake through its web UI. Universal Search promises to offer an ‘augmented analytics’ experience where users can run a query by describing the intended result in natural language. While these features are exciting on their own, they aren’t a major focus for this blog.

Snowflake Streamlit provides an easy way to quickly build simple data applications, integrated with the Snowflake platform. Container Services opens up the possibility of hybrid architectures that leverage Cortex within external business application architectures. The VECTOR data type puts vector embeddings in columns alongside your structured data warehouse data, enabling techniques like RAG without requiring a separate vector database like Pinecone.

Snowflake Cortex is still far from fully materialized as a product, but the foundational building blocks available today paint a picture of a future data platform that enables companies to quickly and safely build AI tools at scale.

Ready to unlock the full potential of data and AI?

Book a free consultation to learn how OneSix can help drive meaningful business outcomes.

Ensuring AI Excellence: Data Privacy/Security and Model Validation


Written by

Arturo Chan Yu, Senior Consultant

Published

August 29, 2023

AI & Machine Learning

Artificial Intelligence (AI) has revolutionized the way businesses operate, empowering them with unprecedented capabilities and insights. However, the success of AI models relies on several critical factors, ranging from data privacy and security to validation and testing. In this blog post, we will delve into the essential aspects of building robust AI models. 

Data Privacy and Security

With the increasing reliance on data comes the paramount responsibility of safeguarding its privacy and security. Data privacy and security are two interconnected concepts, each playing a crucial role in protecting sensitive information: 

Data Privacy

Data privacy involves controlling and managing the access, use, and disclosure of personal or sensitive data. It ensures that individuals have the right to know how their data is being collected, processed, and shared and have the option to consent or opt-out. 

Data Security

Data security, on the other hand, focuses on safeguarding data from unauthorized access, breaches, and malicious attacks. It involves implementing technological and procedural measures to protect data confidentiality, integrity, and availability. 

Essential Measures to Protect Sensitive Data

To ensure robust data privacy and security, organizations must adopt a multi-faceted approach that includes the following measures: 

Anonymization Techniques

Anonymization involves removing or modifying personally identifiable information from datasets. Techniques like data masking, tokenization, and generalization ensure that even if the data is accessed, it cannot be traced back to specific individuals.

Encryption

Data encryption transforms sensitive data into an unreadable format using encryption keys. It adds an extra layer of protection, ensuring that even if data is intercepted, it remains unintelligible without the proper decryption key.
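As a minimal illustration of symmetric encryption at the application layer, the sketch below uses the widely adopted cryptography library; in production, keys would live in a managed key service rather than in code, and most platforms also encrypt data at rest.

# A minimal sketch of application-level symmetric encryption using the
# `cryptography` library. In production, keys belong in a managed key
# service (KMS) or secrets manager, not in application code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store securely, e.g., in a KMS or secrets manager
cipher = Fernet(key)

sensitive = b"customer_email=jane.doe@example.com"
token = cipher.encrypt(sensitive)          # unreadable without the key
restored = cipher.decrypt(token)           # requires the same key

print(token[:24], b"...")
print(restored)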

Access Controls

Implementing stringent access controls is essential to limit data access to authorized personnel only. Role-based access controls (RBAC) ensure that users can only access the data relevant to their roles and responsibilities. 

Regular Data Backups

Regularly backing up sensitive data is crucial in the event of a cyber-attack or data loss. Backups provide a means to restore data and minimize downtime. 

Employee Training

Employees play a vital role in data security. Regular training on data protection best practices and potential security threats helps in building a security-conscious organizational culture and reduces the risk of human errors. 

Compliance with Data Protection Regulations

Data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe, the California Consumer Privacy Act (CCPA) in the United States, and various other regional laws, impose legal obligations on organizations to protect the privacy and security of personal data. Non-compliance can lead to significant fines and reputational damage. Organizations must proactively adhere to these regulations, which often include requirements for data transparency, consent management, data breach notifications, and data subject rights. 

Validation and Testing

Before deploying AI models into production environments, it is essential to rigorously validate and test their performance. This iterative process not only ensures the models are optimized for accuracy but also addresses potential issues, guaranteeing their effectiveness in delivering valuable insights. Validation and testing serve as a litmus test for AI models, determining whether they can deliver the expected results and perform well under diverse conditions. The main goals of validation and testing are to: 

Assess Model Performance

By validating and testing AI models, data scientists can determine how well the models perform on unseen data. This evaluation is crucial to avoid overfitting (model memorization of the training data) and ensure that the models generalize effectively to new, real-world scenarios. 

Fine-tune the Models

Validation and testing provide valuable feedback that helps data scientists fine-tune the models. By identifying areas of improvement, data scientists can make necessary adjustments and optimize the models for better performance.

Ensure Reliability

Validation and testing help build confidence in the models’ reliability, as they provide evidence of their accuracy and precision. This is especially crucial in critical decision-making processes. 

To measure the performance of AI models during validation and testing, various metrics are employed:

Accuracy

Accuracy measures the proportion of correct predictions made by the model. It provides a broad overview of model performance but may not be suitable for imbalanced datasets.

Precision and Recall

Precision represents the proportion of true positive predictions out of all positive predictions, while recall calculates the proportion of true positive predictions out of all actual positive instances. These metrics are useful for tasks where false positives or false negatives have significant consequences. 

F1 Score

The F1 score is the harmonic mean of precision and recall, providing a balance between the two metrics. It is particularly valuable when dealing with imbalanced datasets.

Area Under the Receiver Operating Characteristic Curve (AUC-ROC)

AUC-ROC measures the model’s ability to distinguish between positive and negative instances, making it an excellent metric for binary classification tasks.
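These metrics map directly onto scikit-learn helpers; the small illustration below uses made-up labels and predicted scores purely to show the calculations.

# A small illustration of the validation metrics above using scikit-learn.
# The labels and predicted scores are made up purely to show the calculations.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]   # model-predicted probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("auc-roc  :", roc_auc_score(y_true, y_score))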

The Roadmap to AI-Ready Data

As AI continues to reshape industries and drive innovation, building robust AI models has become a crucial imperative for organizations. Safeguarding sensitive data and iterating AI models are vital steps in this journey. By prioritizing data privacy and security, validating and testing models effectively, and embracing ongoing data readiness, organizations can harness the full potential of AI.

To help you navigate the complexities of preparing your data for AI, OneSix has authored a comprehensive roadmap to AI-ready data. Our goal is to empower organizations with the knowledge and strategies needed to modernize their data platforms and tools, ensuring that their data is optimized for AI applications. 

Read our step-by-step guide for a deep understanding of the initiatives required to develop a modern data strategy that drives business results.

Get Started

OneSix helps companies build the strategy, technology and teams they need to unlock the power of their data.