
Data engineering jargon getting you down? Here’s the ultimate glossary

Switchboard Apr 26


    In today’s tech landscape, especially in the world of data, it can be easy to accidentally fall into the “curse of knowledge”, also known as the “curse of expertise”.

    This refers to the tendency of an individual (or organization) to assume their audience has the same level of background knowledge they do about a given topic.

    So in the spirit of learning and clearing up any confusion, we’ve put together this quick glossary of common terms.

    Common terms in data engineering

    Data analytics

    For data to be useful, it needs to be analyzed. Data analytics is the science of analyzing data to draw conclusions from the information it contains. It's a tool used for the discovery, interpretation, and communication of meaningful patterns in data.

    Data automation

    Data automation is the process of uploading, handling, and processing data automatically, using automated technologies such as software programming tools, instead of doing it manually.

    A common example of how data automation helps many organizations is when a customer creates an online account or registers their contact information via a form (e.g. when signing up for a newsletter). Behind the scenes, data automation will help the organization collect, store, and sort this contact information at scale to enable more effective and personalized customer communications.
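
    As a rough illustration, here's what a fully automated signup flow might look like in Python. The form payload, field names, and SQLite table are all hypothetical; the point is that every submission is validated and stored with no manual handling.

```python
# A minimal sketch of data automation for the newsletter example above.
# The form payload, database table, and field names are hypothetical.
import sqlite3
from datetime import datetime, timezone

def handle_signup(form_data: dict) -> None:
    """Automatically validate, store, and timestamp a newsletter signup."""
    email = form_data.get("email", "").strip().lower()
    if "@" not in email:
        raise ValueError(f"Invalid email address: {email!r}")

    with sqlite3.connect("subscribers.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS subscribers "
            "(email TEXT PRIMARY KEY, signed_up_at TEXT)"
        )
        # INSERT OR IGNORE makes repeated submissions harmless (no duplicates).
        conn.execute(
            "INSERT OR IGNORE INTO subscribers VALUES (?, ?)",
            (email, datetime.now(timezone.utc).isoformat()),
        )

# Every submission flows through the same code path, with no manual steps:
handle_signup({"email": "Reader@Example.com "})
```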

    Data engineering

    Data engineering refers to the ability to take huge volumes of complex, disparate raw data and turn it into useful information for data scientists and other teams within an organization. The core purpose of data engineering is therefore to build systems that facilitate the collection and usage of data.

    Data lake

    A data lake is a large, centralized repository of unstructured and structured data stored in its raw format. It is designed to support advanced analytics and machine learning activities, and typically involves storing data from a variety of sources in its native format. You can think of it metaphorically as the body of water it's named after: the storage "lake" is the water, and the data inside it is like the fish, plants, rocks, and other things you might find in a lake.

    Data lineage

    In a similar way to how marketers refer to products as having a "lifecycle", data also has a lifecycle, and data lineage is designed to show this cycle from the start to the finish of any given data flow. The purpose of data lineage is to understand, record, and visualize data as it flows from its source all the way through to its consumption.
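
    A minimal Python sketch of the idea, with made-up step and source names: each hop a record takes is logged, so the data's full journey from source to consumption can be replayed later.

```python
# A toy lineage log; the pipeline steps and names are illustrative,
# not a real lineage standard or tool.
lineage = []

def record(step: str, source: str, destination: str) -> None:
    """Append one hop of the data's journey to the lineage log."""
    lineage.append({"step": step, "from": source, "to": destination})

record("extract", "crm_api", "raw/contacts.json")
record("transform", "raw/contacts.json", "staging/contacts_clean")
record("load", "staging/contacts_clean", "warehouse.contacts")

# Walking the log reproduces the flow from source to consumption:
for hop in lineage:
    print(f"{hop['step']:>9}: {hop['from']} -> {hop['to']}")
```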

    Data science

    Often used interchangeably with the terms 'data engineering' or 'data analytics', data science uses statistics and data analysis processes to find patterns in data, which can then be used to build predictive tools. Data analytics, or 'big data' analytics, then takes over, using the tools built by data scientists to process and interpret new data, thereby producing actionable insights for the benefit of a business.

    Data mesh

    The idea of a data mesh is to eliminate the silos commonly found between data teams within an organization. It's an approach that unifies data and connects sources in such a way that the data is easily accessible by any data consumer within the organization.

    Data quality monitoring

    Ensuring your data is of usable quality can be difficult, so it's important for an organization to implement data quality monitoring. This allows teams to measure and analyze data, and to detect, understand, fix, and reduce quality issues as they arise, so the data continues to meet business needs.
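
    As a simple sketch of what such checks might look like, here's a Python function with two illustrative rules (missing emails and duplicate IDs); a real monitoring system would run checks like these on a schedule and alert when thresholds are breached.

```python
# A minimal sketch of data quality checks; the rules and field names
# are illustrative only.
def check_quality(rows: list[dict]) -> dict:
    """Compute a few simple quality metrics over a batch of rows."""
    total = len(rows)
    missing_email = sum(1 for r in rows if not r.get("email"))
    duplicates = total - len({r.get("id") for r in rows})
    return {
        "row_count": total,
        "missing_email_rate": missing_email / total if total else 0.0,
        "duplicate_ids": duplicates,
    }

report = check_quality([
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},                # quality issue: missing email
    {"id": 2, "email": "b@example.com"},   # quality issue: duplicate id
])
print(report)  # {'row_count': 3, 'missing_email_rate': 0.333..., 'duplicate_ids': 1}
```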

    Data warehouse

    Unlike a data lake, a data warehouse is a large, centralized repository of structured, pre-processed data that is optimized for reporting and analysis. It is designed to support business intelligence and decision-making activities and typically involves extracting, transforming, and loading data from various sources into a structured format.

    Declarative programming

    Similar to functional data engineering, declarative programming refers to a style of building the structure and elements of computer programs that describes what a program does without specifying its control flow. In other words, it's a way of writing code that describes what you want to do, rather than how you want to do it. Some common declarative programming languages are Prolog, SQL, and embedded SQL.
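
    SQL is the classic example. In the Python sketch below (with a hypothetical orders table), the query states what we want, and the database engine decides how to scan, filter, and return the rows.

```python
# Declarative style, sketched with SQL from Python. The table and
# column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 40.0), (2, 75.0), (3, 120.0)])

# The SQL states *what* we want (orders over 50), not *how* to loop
# over rows, compare values, or build the result.
big_orders = conn.execute(
    "SELECT id, amount FROM orders WHERE amount > 50"
).fetchall()
print(big_orders)  # [(2, 75.0), (3, 120.0)]
```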

    ETL

    The acronym “ETL” stands for “Extract, Transform, Load”, a data integration process used to unify data into a central repository such as a data lake or a data warehouse.
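
    As a bare-bones sketch, assuming a hypothetical CSV source and SQLite destination, the three stages might look like this in Python:

```python
# A minimal ETL sketch; the source file, cleaning rule, and destination
# table are all hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: pull raw rows from the source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: clean and reshape, dropping rows with no email."""
    return [
        (row["email"].strip().lower(), float(row["amount"]))
        for row in rows
        if row.get("email")
    ]

def load(rows: list[tuple]) -> None:
    """Load: write the cleaned rows into the central repository."""
    with sqlite3.connect("warehouse.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS sales (email TEXT, amount REAL)")
        conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)

# Write a tiny sample source file so the sketch runs end to end.
with open("raw_sales.csv", "w", newline="") as f:
    f.write("email,amount\nReader@Example.com,19.99\n,5.00\n")

load(transform(extract("raw_sales.csv")))
```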

    Functional data engineering

    Building on the concept of "functional programming", functional data engineering is a paradigm popularized by Maxime Beauchemin (the creator of Apache Airflow and Apache Superset). It centers on "pure tasks" that are declarative and produce the same results every time they run, which helps make data engineering more manageable, maintainable, and scalable.
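
    A toy Python sketch of a "pure task", with an illustrative schema: the task is deterministic, and the load overwrites its target partition rather than appending, so re-running it is always safe.

```python
# A "pure task" sketch: same inputs, same output, and an idempotent load.
# The order schema and partition store are hypothetical.
def daily_revenue(orders: list[dict], day: str) -> dict:
    """Pure: no hidden state, no side effects, deterministic for fixed inputs."""
    total = sum(o["amount"] for o in orders if o["date"] == day)
    return {"date": day, "revenue": total}

def write_partition(store: dict, result: dict) -> None:
    # Idempotent load: the partition is replaced, never appended to,
    # so running the task twice leaves the store in the same state.
    store[result["date"]] = result

store = {}
orders = [{"date": "2023-01-01", "amount": 10.0},
          {"date": "2023-01-01", "amount": 5.0}]
write_partition(store, daily_revenue(orders, "2023-01-01"))
write_partition(store, daily_revenue(orders, "2023-01-01"))  # safe re-run
print(store)  # {'2023-01-01': {'date': '2023-01-01', 'revenue': 15.0}}
```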

    Imperative programming

    The opposite of declarative programming, imperative programming refers to a style of writing code and building programs that describes how a program should behave, step by step. Imperative programming is usually context-sensitive, and can also be described as "algorithmic programming" or, more simply, "trigger and response".
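
    For contrast, here is the same "orders over 50" query from the declarative example above, written imperatively in Python, with every step of the control flow spelled out:

```python
# Imperative style: the code dictates each step of the control flow.
orders = [(1, 40.0), (2, 75.0), (3, 120.0)]

big_orders = []                                # 1. create an empty result
for order_id, amount in orders:                # 2. visit every row in turn
    if amount > 50:                            # 3. test each row explicitly
        big_orders.append((order_id, amount))  # 4. build the result by hand

print(big_orders)  # [(2, 75.0), (3, 120.0)]
```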

    Data engineering automation

    The automation of data engineering involves using tools and techniques such as code generation, configuration management, and deployment automation to create and manage data pipelines and data infrastructure. This can include automating processes such as data ingestion, data transformation, and data loading. By automating data engineering tasks, organizations can increase the efficiency and scalability of their data infrastructure while reducing errors and the need for manual intervention. This, in turn, helps organizations make better use of their data and make more data-driven decisions.
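
    As a loose sketch of the configuration-driven flavor of this idea, with made-up pipeline specs: a single generic runner executes whatever the configuration declares, so adding a pipeline becomes a config change rather than new code.

```python
# A minimal sketch of configuration-driven pipeline automation.
# The pipeline specs and stage names are hypothetical.
PIPELINES = [
    {"name": "contacts", "source": "crm_export.csv", "table": "contacts"},
    {"name": "sales", "source": "pos_export.csv", "table": "sales"},
]

def run_pipeline(spec: dict) -> None:
    """One generic runner handles every declared pipeline."""
    # In a real system each stage would call ETL steps like those sketched above.
    print(f"[{spec['name']}] ingest {spec['source']} -> transform -> load {spec['table']}")

for spec in PIPELINES:  # adding a pipeline is a config change, not new code
    run_pipeline(spec)
```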

    If you need help unifying your first- or second-party data, we can help. Contact us to learn how.
