
393 Dbt Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Chandigarh

On-site

You should possess 7-10 years of industry experience, of which at least 5 years should have been in machine learning roles. Your proficiency in Python and popular ML libraries such as TensorFlow, PyTorch, and Scikit-learn should be advanced, and you should have hands-on experience with distributed training, model optimization (including quantization and pruning), and inference at scale. Experience with cloud ML platforms like AWS (SageMaker), GCP (Vertex AI), or Azure ML is essential.

You are expected to be familiar with MLOps tooling such as MLflow, TFX, Airflow, or Kubeflow, and with data engineering frameworks like Spark, dbt, or Apache Beam. A solid understanding of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay) is crucial for this role. In addition to technical skills, problem-solving ability, effective communication, and strong documentation skills are highly valued in this position.

Posted 21 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role.

Responsibilities:
- Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems.
- Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data.
- Optimize SQL queries, stored procedures, and data transformations for enhanced performance.
- Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi.
- Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift.
- Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
- Work closely with data engineers, analysts, and business stakeholders to understand data requirements and deliver solutions.
- Conduct data profiling, validation, and quality assurance to maintain clean and consistent data.
- Maintain comprehensive documentation for data models, pipelines, and architecture.

Required Skills & Qualifications:
- Proficiency in advanced SQL, including query tuning, indexing, and performance optimization.
- Experience developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica.
- Familiarity with data orchestration frameworks such as Airflow, Dagster, or Luigi.
- Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies.
- Deep understanding of data warehousing principles such as star/snowflake schemas and slowly changing dimensions.
- Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
- Strong analytical and problem-solving skills in high-scale environments.

Preferred Qualifications:
- Exposure to real-time data pipelines (e.g., Kafka, Spark Streaming).
- Knowledge of data mesh or data fabric architecture paradigms.
- Certifications in Snowflake, Databricks, or relevant cloud platforms.
- Familiarity with Python or Scala for data engineering tasks.

Posted 22 hours ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Candidates must have a minimum educational qualification of 10th or 12th standard, or ITI with a Trade Certificate (mandatory). Both local and non-local candidates are welcome, and the date of joining is immediate. Postings are made to various locations in Tamil Nadu every month, and you will be assigned to one of the following plants:

- Plant 1, Trichy: 400 openings, accommodation available
- Plant 2, Arakkonam: 300 openings, accommodation available
- Plant 3, Tiruvottiyur: 200 openings, no accommodation provided (own stay only)

Shifts are rotational and structured as follows:
- Shift A: 7:00 AM to 3:00 PM
- Shift B: 3:00 PM to 11:00 PM
- Shift C: 11:00 PM to 7:00 AM

Week-offs are also on a rotational basis. Accommodation is available with a deduction of Rs. 1,000 per month for 26 working days, transportation with a deduction of Rs. 500 per month for 26 working days, and food during duty hours with a deduction of Rs. 350 per month for 26 working days. Candidates aged 18 to 25 are eligible for this position. The stipend offered is Rs. 15,000/-, along with Rs. 1,500/- through DBT (Direct Benefit Transfer) and an attendance bonus of Rs. 1,000/-.

Key skills required for this role include a technical certificate (ITI), teamwork, willingness to work rotational shifts, understanding of transportation logistics, experience in manufacturing environments, understanding of bonus systems, and familiarity with DBT processes.

Posted 22 hours ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an Application Development and Support Engineer with 6-10 years of experience, your primary responsibilities will include developing, maintaining, and supporting Python-based applications and automation scripts. You will design, implement, and optimize SQL queries and database objects to support application functionality and quant needs, build and manage ETL pipelines using tools like dbt or Azure Data Factory (ADF), and troubleshoot application or data pipeline issues promptly to minimize downtime.

You will be expected to take ownership of assigned tasks and drive them to completion with minimal supervision. Continuous improvement of processes and workflows to enhance efficiency and quality will be a key focus, and you will document solutions, processes, and support procedures clearly and comprehensively.

To excel in this position, you should be proficient in Python programming, with experience in scripting and automation. Strong knowledge of SQL for querying and manipulating relational databases is essential, along with hands-on experience in ETL tools like dbt and Airflow. Familiarity with version control systems such as Git, an understanding of Agile software development methodologies, and knowledge of containerization and orchestration tools like Docker and Kubernetes are also required.

Soft skills are equally important: excellent problem-solving skills with a proactive approach, a strong sense of ownership and accountability, the ability to work effectively both independently and as part of a collaborative Agile team, good communication skills to articulate questions, concerns, and recommendations, and a demonstrated drive to complete tasks efficiently and meet deadlines.

Preferred qualifications include experience with cloud platforms like Azure, familiarity with monitoring and alerting tools for application support, and prior experience in the financial services or fintech domain. If you thrive in a dynamic environment, enjoy working with cutting-edge technologies, and are passionate about application development and support, this role could be the perfect fit for you.
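For readers unfamiliar with how the dbt-plus-Airflow combination mentioned in roles like this is typically wired together, the sketch below shows a minimal Airflow DAG that runs dbt models and then their tests. It is purely illustrative, not part of the posting: the DAG id, schedule, and project paths are assumptions, and it presumes Airflow 2.4+ with the dbt CLI installed on the worker.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical paths; in practice these point at the deployed dbt project and profiles.
DBT_PROJECT_DIR = "/opt/dbt/analytics"
DBT_PROFILES_DIR = "/opt/dbt"

with DAG(
    dag_id="dbt_daily_build",        # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule` keyword requires Airflow 2.4+
    catchup=False,
) as dag:
    # Build the dbt models first, then run the tests defined alongside them.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROFILES_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROFILES_DIR}",
    )

    dbt_run >> dbt_test
```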

Posted 22 hours ago

Apply

15.0 - 19.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Lead / Data Architect, you will play a crucial role in our organization by leveraging your expertise in modern data architectures, cloud platforms, and analytics technologies. In this leadership position, you will be responsible for designing robust data solutions, guiding engineering teams, and ensuring successful project execution in collaboration with the project manager.

Your key responsibilities will include architecting and designing end-to-end data solutions across multi-cloud environments such as AWS, Azure, and GCP. You will lead and mentor a team of data engineers, BI developers, and analysts to deliver complex project deliverables, and define and enforce best practices in data engineering, data warehousing, and business intelligence. You will design scalable data pipelines using tools like Snowflake, dbt, Apache Spark, and Airflow, and act as a technical liaison with clients, providing strategic recommendations and maintaining strong relationships.

To be successful in this role, you should have at least 15 years of experience in IT with a focus on data architecture, engineering, and cloud-based analytics. You must have expertise in multi-cloud environments and cloud-native technologies, along with deep knowledge of Snowflake, data warehousing, ETL/ELT pipelines, and BI platforms. Strong leadership and mentoring skills are essential, as are excellent communication and interpersonal abilities to engage with both technical and non-technical stakeholders.

Certifications in major cloud platforms and experience in enterprise data governance, security, and compliance are preferred, and familiarity with AI/ML pipeline integration would be a plus. We offer a collaborative work environment, opportunities to work with cutting-edge technologies and global clients, competitive salary and benefits, and continuous learning and professional development opportunities. Join us in driving innovation and excellence in data architecture and analytics.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for leading the delivery of complex solutions by coding larger features from start to finish. Actively participating in planning and performing code and architecture reviews of your team's product will be a crucial aspect of your role. You will help ensure the quality and integrity of the Software Development Life Cycle (SDLC) for your team by identifying opportunities to improve how the team works through the use of recommended tools and practices. Additionally, you will lead the triage of complex production issues across systems and demonstrate creativity and initiative in solving complex problems. As a high performer, you will consistently deliver a high volume of story points relative to your team.

Being aware of the technology landscape, you will plan the delivery of coarse-grained business needs spanning multiple applications. You will also influence technical peers outside your team, set a consistent example of agile development practices, and coach other engineers to work as a team with Product and UX. Furthermore, you will create and enhance internal libraries and tools, provide technical leadership on the product, and determine the technical approach. Proactively communicating status and issues to your manager, collaborating with other teams to find creative solutions to customer issues, and showing a commitment to delivery deadlines, especially seasonal and vendor partner deadlines that are critical to Best Buy's continued success, will be essential.

Basic Qualifications:
- 5+ years of relevant technical professional experience with a bachelor's degree OR equivalent professional experience.
- 2+ years of experience with Google Cloud services including Dataflow, BigQuery, and Looker.
- 1+ years of experience with Adobe Analytics, Content Square, or similar technologies.
- Hands-on experience with data engineering and visualization tools like SQL, Airflow, dbt, Power BI, Tableau, and Looker.
- Strong understanding of real-time data processing and issue detection.
- Expertise in data architecture, database design, data quality standards/implementation, and data modeling.

Preferred Qualifications:
- Experience working in an omni-channel retail environment.
- Experience connecting technical issues with business performance metrics.
- Experience with Forsta or similar customer feedback systems.
- Certification in Google Cloud Platform services.
- Good understanding of data governance, data privacy laws and regulations, and best practices.

About Best Buy: BBY India is a service provider to Best Buy, and as part of the team working on Best Buy projects and initiatives, you will help fulfill Best Buy's purpose to enrich lives through technology. Every day, you will humanize and personalize tech solutions for every stage of life in Best Buy stores, online, and in Best Buy customers' homes. Best Buy is a place where techies can make technology more meaningful in the lives of millions of people. The unique culture at Best Buy unleashes the power of its people and provides fast-moving, collaborative, and inclusive experiences that empower employees of all backgrounds to make a difference, learn, and grow every day. Best Buy's culture is built on deeply supporting and valuing its amazing employees, and the company is committed to being a great place to work where you can unlock unique career possibilities. Above all, Best Buy aims to provide a place where people can bring their full, authentic selves to work now and into the future. Tomorrow works here.

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

The successful candidate will be responsible for developing and maintaining applications using Python, SQL, React.js, and Java, and for building and managing data pipelines on platforms such as Databricks, dbt, Snowflake, RSL (Report Specification Language, Geneva), and RDL (Report Definition Language). Experience with non-functional aspects such as performance management, scalability, and availability will be crucial. You will collaborate closely with front-office, operations, and finance teams to enhance reporting and analysis for alternative investments, and work with cross-functional teams to drive automation, workflow efficiencies, and reporting enhancements. Troubleshooting system issues, implementing enhancements, and ensuring optimal system performance under a follow-the-sun model for end-to-end application coverage will also be part of your responsibilities.

Qualifications & Experience:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: Minimum of 2 years of experience in enterprise software development and production management, preferably within financial services.
- Proficiency in at least one programming language: Python, Java, React.js, or SQL.
- Familiarity with alternative investments and their reporting requirements.
- Hands-on experience with relational databases and complex query authoring.
- Ability to thrive in a fast-paced work environment with quick iterations.
- Must be able to work out of our Bangalore office.

Preferred Qualifications:
- Knowledge of AWS/Azure services.
- Previous experience in the asset management/private equity domain.

This role provides an exciting opportunity to work in a fast-paced, engineering-focused startup environment and contribute to meaningful projects that address complex business challenges. Join our team and become part of a culture that values innovation, collaboration, and excellence.

FS Investments: 30 years of leadership in private markets. FS Investments is an alternative asset manager focused on delivering attractive returns across private equity, private credit, and real estate. With the acquisition of Portfolio Advisors in 2023, FS Investments now manages over $85 billion for institutional and wealth management clients globally. With over 30 years of experience and more than 500 employees across nine global offices, the firm's investment professionals oversee a variety of strategies across private markets and maintain relationships with 300+ sponsors. FS Investments' active partnership model fosters superior market insights and deal flow, informing the underwriting process and contributing to strong returns. FS is an Equal Opportunity Employer. FS Investments does not accept unsolicited resumes from recruiters or search firms; any resume or referral submitted without a signed agreement is the property of FS Investments, and no fee will be paid.

Posted 1 day ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

You are an experienced Senior Data Analyst with a minimum of 7-8 years of experience in data analysis roles and significant exposure to Snowflake. Your primary responsibilities will include querying and analyzing data stored in Snowflake databases to derive meaningful insights that support business decision-making. You will also develop and maintain data models and schema designs within Snowflake to facilitate efficient data analysis, and create and maintain data visualizations and dashboards using tools like Tableau or Power BI with Snowflake as the underlying data source. Collaboration with business stakeholders to understand data requirements and translate them into analytical solutions is a key aspect of this role, as is performing data validation, quality assurance, and data cleansing within Snowflake databases. You will also support the implementation and enhancement of ETL processes and data pipelines to ensure data accuracy and completeness.

A Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Information Systems, or a related field is required. Certifications in data analytics, data visualization, or cloud platforms are desirable but not mandatory.

Your primary skills should include strong proficiency in querying and analyzing data using Snowflake SQL and dbt, a solid understanding of data modeling and schema design within Snowflake environments, and experience with data visualization and reporting tools such as Power BI, Tableau, or Looker for presenting insights derived from Snowflake. Familiarity with ETL processes and data pipeline development is also crucial, along with a proven track record of using Snowflake for complex data analysis and reporting tasks. Strong problem-solving and analytical skills, including the ability to derive actionable insights from data, are key requirements, and experience with programming languages like Python or R for data manipulation and analysis is a plus.

Secondary skills that would be beneficial include knowledge of cloud platforms and services such as AWS, Azure, or GCP, excellent communication and presentation skills, strong attention to detail, a proactive approach to problem-solving, and the ability to work collaboratively in a team environment.

This role is for a Senior Data Analyst specializing in Snowflake, based in either Trivandrum or Bangalore. The working hours are 8 hours per day from 12:00 PM to 9:00 PM, with a few hours of overlap with the EST time zone for mandatory meetings. The closing date for applications is 18-04-2025.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Telangana

On-site

You will be joining Teradata, a company that believes in empowering individuals with better information through its cloud analytics and data platform for AI. By providing harmonized data, trusted AI, and faster innovation, Teradata enables customers and their clients to make more informed decisions across various industries.

As part of the team, your responsibilities will include designing, developing, and maintaining scalable enterprise applications, data processing, and engineering pipelines. You will write efficient, scalable, and clean code primarily in Go (Golang), Java, or Python. Collaborating with cross-functional teams, you will define, design, and implement new features while ensuring the availability, reliability, and performance of deployed applications. Integrating with CI/CD pipelines will be crucial for seamless deployment and development cycles. Monitoring and optimizing application performance, troubleshooting and resolving customer incidents, and supporting the Customer Support and Operations teams are also part of your role.

You will work with a high-performing engineering team that values innovation, continuous learning, and open communication. The team focuses on mutual respect, empowering members, celebrating diverse perspectives, and fostering professional growth. This Individual Contributor role reports to the Engineering Manager.

To qualify for this role, you should have a B.Tech/M.Tech/MCA/MSc degree in CSE/IT or a related discipline, along with 3-5 years of relevant industry experience. Expertise in SQL and either Java or Golang is essential, as is experience with Python and REST APIs in Linux environments and with public cloud environments like AWS, Azure, or Google Cloud. Excellent communication and teamwork skills are also required.

Preferred qualifications include experience with containerization (Docker) and orchestration tools (Kubernetes), modern data engineering tools such as Airbyte, Airflow, and dbt, good knowledge of Java/Python and development experience, familiarity with the Teradata database, a proactive and solution-oriented mindset, a passion for technology and continuous learning, the ability to work independently while contributing to the team's success, creativity, adaptability, a strong sense of ownership and accountability, and a drive to make an impact.

Teradata prioritizes a people-first culture, offering a flexible work model, focusing on well-being, and being an anti-racist company dedicated to fostering a diverse, equitable, and inclusive environment that values individuals for who they are.

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

Job Description: As a Snowflake Admin with 6+ years of experience and the ability to join immediately, you will be responsible for administering and managing Snowflake environments, including configuration, security, and maintenance tasks. Your role will involve monitoring and optimizing Snowflake performance, storage usage, and query efficiency to enhance overall system functionality.

In this position, you will implement and manage role-based access control (RBAC) and data security policies to safeguard sensitive information. You will also set up and oversee data sharing, data replication, and virtual warehouses to support various data operations effectively.

You will be expected to automate administrative tasks using SQL, the Snowflake CLI, or scripting languages such as Python and Bash; your proficiency in these tools will be essential for streamlining processes and improving efficiency within the Snowflake environment. Providing support for data integration tools and pipelines such as Fivetran, dbt, Informatica, and Airflow will also be part of your responsibilities.

Key Skills: Snowflake Admin
Industry Type: IT / Computers - Software
Functional Area: Not specified
Required Education: Bachelor's degree
Employment Type: Full Time, Permanent

If you are looking for a dynamic opportunity to utilize your expertise in Snowflake administration, apply now with Job Code GO/JC/668/2025. Join our team and work alongside our recruiter, Christopher, in this contract hiring role.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Alphanext is a global talent solutions company with offices in London, Pune, and Indore. We connect top-tier technical talent with forward-thinking organizations to drive innovation and transformation through technology.

We are seeking a Senior Data Integration Engineer to take charge of designing, building, and governing scalable, high-performance data pipelines across enterprise systems. The ideal candidate will have extensive experience in data engineering and integration, particularly within manufacturing, retail, and supply chain ecosystems. This role plays a crucial part in ensuring near-real-time data flows, robust data quality, and seamless integration among ERP, WMS, commerce, and finance platforms, thereby enabling AI and analytics capabilities throughout the enterprise.

Key Responsibilities:
- Design and maintain ELT/ETL pipelines that integrate systems such as BlueCherry ERP, Manhattan WMS, and Shopify Plus.
- Develop event-driven architectures utilizing Azure Service Bus, Kafka, or Event Hubs for real-time data streaming.
- Define and publish data contracts and schemas (JSON/Avro) in the enterprise Data Catalog to ensure lineage and governance.
- Automate reconciliation processes with workflows that detect discrepancies, raise alerts, and monitor data-quality SLAs.
- Lead code reviews, establish integration playbooks, and provide guidance to onshore/offshore engineering teams.
- Collaborate with the Cybersecurity team to implement encryption, PII masking, and audit-compliant data flows.
- Facilitate AI and analytics pipelines, including feeds for feature stores and streaming ingestion to support demand forecasting and GenAI use cases.

Year-One Deliverables:
- Replace the existing nightly CSV-based exchange between BlueCherry and WMS with a near-real-time event bus integration.
- Launch a unified product master API that feeds PLM, OMS, and e-commerce within 6 months.
- Automate three-way reconciliation of PO, packing list, and warehouse receipt to support traceability audits (e.g., BCI cotton).
- Deploy a data quality dashboard with rule-based alerts and SLA tracking metrics.

Must-Have Technical Skills:
- 5+ years of experience in data engineering or integration-focused roles.
- Proficiency with at least two of the following: Azure Data Factory, Databricks, Kafka/Event Hubs, dbt, SQL Server, Logic Apps, Python.
- Strong SQL skills and experience with a compiled or scripting language (Python, C#, or Java).
- Proven track record of integrating ERP, WMS, PLM, or similar retail/manufacturing systems.
- Expertise in data modeling, schema design (JSON/Avro), and schema versioning.
- Working knowledge of CI/CD pipelines and infrastructure-as-code using tools like GitHub Actions and Azure DevOps.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (preferred).
- Exceptional problem-solving skills, an analytical mindset, and attention to data governance.
- Strong communication and leadership abilities, with a history of mentoring and collaborating with teams.

Posted 1 day ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a skilled Snowflake Developer with over 7 years of experience, you will be responsible for designing, developing, and optimizing Snowflake data solutions. Your expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration will be crucial in building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities:
- Design and develop Snowflake databases, schemas, tables, and views following best practices.
- Write complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimize query performance using clustering, partitioning, and materialized views.
- Implement Snowflake features such as Time Travel, Zero-Copy Cloning, and Streams & Tasks.
- Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Develop CDC (Change Data Capture) and real-time data processing solutions.
- Design star schema, snowflake schema, and data vault models in Snowflake.
- Implement data sharing, secure views, and dynamic data masking.
- Ensure data quality, consistency, and governance across Snowflake environments.
- Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
- Collaborate with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Document data flows, architecture, and technical specifications.
- Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, dbt).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills:
- Familiarity with data governance and metadata management.
- Familiarity with dbt, Airflow, SSIS, and IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).
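As an aside for readers unfamiliar with the Snowflake features named in this listing, Time Travel and Zero-Copy Cloning are exposed as plain SQL. The Python sketch below, using the snowflake-connector-python package, is purely illustrative and not part of the posting; the account, credentials, table, and clone names are assumptions.

```python
import snowflake.connector

# Hypothetical connection details for illustration only.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()

    # Time Travel: query the table as it looked one hour (3600 seconds) ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("Row count one hour ago:", cur.fetchone()[0])

    # Zero-Copy Cloning: create a writable copy without duplicating storage.
    cur.execute("CREATE TABLE orders_dev CLONE orders")
finally:
    conn.close()
```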

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

You will be part of a data analytics services company that specializes in creating and managing scalable data platforms for a diverse client base, leveraging cutting-edge technologies to deliver actionable insights and value through modern data stack solutions.

Your responsibilities will include independently designing, building, and managing customer data platforms using Snowflake, dbt, Fivetran, and SQL. Collaborating with clients and internal teams to gather business requirements and translate them into reliable data solutions will be a key aspect of your role. You will also develop and maintain ELT pipelines with Fivetran and dbt to automate data ingestion, transformation, and delivery, and optimize SQL code and data models for scalability, performance, and cost efficiency in Snowflake. Ensuring data platform reliability, monitoring, and data quality will also be part of your responsibilities, along with providing technical mentorship and guidance to junior engineers and maintaining comprehensive documentation of engineering processes and architecture.

The required skills and qualifications include proven hands-on experience with Snowflake, dbt, Fivetran, and SQL; a strong understanding of data warehousing concepts, ETL/ELT best practices, and modern data stack architectures; and experience working independently and owning project deliverables end-to-end. Familiarity with version control systems like Git and workflow automation tools, solid communication and documentation skills, and the ability to interact directly with clients and understand their business requirements are also necessary.

Preferred skills that would be beneficial include exposure to cloud platforms like AWS, GCP, and Azure, knowledge of Python or other scripting languages for data pipelines, and experience with BI/analytics tools such as Tableau, Power BI, and Looker.

In return, you will have the opportunity to lead the implementation of state-of-the-art data platforms for global clients in a dynamic, growth-oriented work environment with flexible working arrangements and a competitive compensation package. If you are interested in this opportunity, please submit your resume and a short cover letter detailing your experience with Snowflake, dbt, Fivetran, and SQL.

Posted 2 days ago

Apply

10.0 - 14.0 years

0 Lacs

Dehradun, Uttarakhand

On-site

You should have familiarity with modern storage formats like Parquet and ORC. Your responsibilities will include designing and developing conceptual, logical, and physical data models to support enterprise data initiatives. You will build, maintain, and optimize data models within Databricks Unity Catalog, developing efficient data structures using Delta Lake to optimize performance, scalability, and reusability. Collaboration with data engineers, architects, analysts, and stakeholders is essential to ensure data models align with ingestion pipelines and business goals. You will translate business and reporting requirements into a robust data architecture using best practices in data warehousing and Lakehouse design, and maintain comprehensive metadata artifacts such as data dictionaries, data lineage, and modeling documentation. Enforcing and supporting data governance, data quality, and security protocols across data ecosystems will be part of your role, as will continuously evaluating and improving modeling processes.

The ideal candidate will have 10+ years of hands-on experience in data modeling in Big Data environments, with expertise in OLTP, OLAP, dimensional modeling, and enterprise data warehouse practices, and proficiency in modeling methodologies including Kimball, Inmon, and Data Vault. Hands-on experience with modeling tools like ER/Studio, ERwin, PowerDesigner, SQLDBM, dbt, or Lucidchart is preferred. Proven experience in Databricks with Unity Catalog and Delta Lake is necessary, along with a strong command of SQL and Apache Spark for querying and transformation. Experience with the Azure Data Platform, including Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database, is beneficial, and exposure to Azure Purview or similar data cataloging tools is a plus. Strong communication and documentation skills are required, with the ability to work in cross-functional agile environments.

Qualifications for this role include a Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. Certifications such as Microsoft DP-203 (Data Engineering on Microsoft Azure) are desirable. Experience working in agile/scrum environments and exposure to enterprise data security and regulatory compliance frameworks (e.g., GDPR, HIPAA) are also advantageous.

Posted 2 days ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

We are looking for a DBT Developer with 5 to 10 years of experience and invite applications for the role of Lead Consultant, DBT Data Engineer. As a DBT Data Engineer, you will provide technical direction and lead a group of one or more developers toward a common goal. Your responsibilities will include designing, developing, and automating ETL processes using dbt and AWS, and building robust data pipelines to transfer data from various sources to data warehouses or data lakes. Collaborating with cross-functional teams is crucial to ensure data accuracy, completeness, and consistency, and data cleansing, validation, and transformation are essential to maintain data quality and integrity. Optimizing database and query performance to ensure efficient data processing will also be part of your responsibilities, and you will work closely with data analysts and data scientists to provide clean, reliable data for analysis and modeling.

Your role will involve writing SQL queries against Snowflake and developing scripts for Extract, Load, and Transform operations. Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Cloning, the query optimizer, Metadata Manager, data sharing, stored procedures, and UDFs is required, as is proficiency with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for data integration. Additionally, you should have solid experience in Python/PySpark integration with Snowflake and cloud services like AWS/Azure, along with a sound understanding of ETL tools and data integration techniques.

You will collaborate with business stakeholders to understand data requirements and develop ETL solutions accordingly. Strong programming skills in languages like Python, Java, and/or Scala are expected, and experience with big data technologies such as Kafka, cloud computing platforms like AWS, and database technologies such as SQL, NoSQL, and/or graph databases is advantageous. Experience in requirement gathering, analysis, design, development, and deployment will be valuable, and building data ingestion pipelines, deploying with CI/CD tools like Azure Boards and GitHub, and writing automated test cases are desirable skills. Client-facing project experience and knowledge of Snowflake best practices will also be beneficial.

If you are a skilled DBT Data Engineer with a passion for data management and analytics, we encourage you to apply for this exciting opportunity!

Posted 2 days ago

Apply

9.0 - 11.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

Qualifications:
- Overall 9+ years of IT experience.
- Minimum of 5+ years managing Data Lakehouse environments preferred; specific experience with Azure Databricks, Snowflake, and dbt (nice to have) is a plus.
- Hands-on experience with data warehousing, data lake/lakehouse solutions, data pipelines (ELT/ETL), SQL, Spark/PySpark, and dbt.
- Strong understanding of data modelling, SDLC, Agile, and DevOps principles.
- Bachelor's degree in management/computer information systems, computer science, accounting information systems, or a relevant field.

Knowledge/Skills:
- Tools and technologies: Azure Databricks, Apache Spark, Python, Databricks SQL, Unity Catalog, and Delta Live Tables; understanding of cluster configuration and the compute and storage layers.
- Expertise with Snowflake architecture, with experience in design, development, and evolution.
- System integration experience, including data extraction, transformation, and quality controls design techniques.
- Familiarity with data science concepts, as well as MDM, business intelligence, and data warehouse design and implementation techniques.
- Extensive experience with the medallion architecture data management framework as well as Unity Catalog.
- Data modeling and information classification expertise at the enterprise level.
- Understanding of metamodels, taxonomies, and ontologies, as well as of the challenges of applying structured techniques (data modeling) to less-structured sources.
- Ability to assess rapidly changing technologies and apply them to business needs.
- Ability to translate the information architecture contribution to business outcomes into simple briefings for use by various data-and-analytics-related roles.

About Us: Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About The Team: Datavail's Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, designing and building complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail's team is comprised of not just excellent BI and analytics consultants, but great people as well. Datavail's data intelligence consultants are experienced, knowledgeable, and certified in best-of-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools which best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the data management experts on demand you desire. Datavail's flexible and client-focused services always add value to your organization.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don't pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry.

As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that develops high-quality data architecture solutions for various software applications, platforms, and data products, driving significant business impact and helping shape the global target state architecture through your capabilities across multiple data architecture domains. You represent the data architecture team at technical governance bodies and provide feedback on proposed improvements to data architecture governance practices, evaluate new and current technologies using existing data architecture standards and frameworks, and regularly provide technical guidance and direction to support the business and its technical teams, contractors, and vendors. You design secure, high-quality, scalable solutions and review architecture solutions designed by others, drive data architecture decisions that impact data product and platform design, application functionality, and technical operations and processes, and serve as a function-wide subject matter expert in one or more areas of focus. You also actively contribute to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle, influence peers and project decision-makers to consider the use and application of leading-edge technologies, and advise junior architects and technologists.

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 5+ years of applied experience.
- Advanced knowledge of architecture, applications, and technical processes, with considerable in-depth knowledge of the data architecture discipline and its solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain driven design).
- Practical cloud-based data architecture and deployment experience, preferably AWS.
- Practical SQL development experience in cloud-native relational databases, e.g., Snowflake, Athena, Postgres.
- Ability to deliver various types of data models with multiple deployment targets, e.g., conceptual, logical, and physical data models deployed as operational vs. analytical data stores.
- Advanced skills in one or more data engineering disciplines, e.g., streaming, ELT, event processing.
- Ability to tackle design and functionality problems independently with little to no oversight.
- Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future state data architecture.

Preferred qualifications, capabilities, and skills:
- Financial services experience; card and banking a big plus.
- Practical experience with modern data processing technologies, e.g., Kafka streaming, dbt, Spark, Airflow.
- Practical experience with data mesh and/or data lakes.
- Practical experience in machine learning/AI, with Python development a big plus.
- Practical experience with graph and semantic technologies, e.g., RDF, LPG, Neo4j, Gremlin.
- Knowledge of architecture assessment frameworks, e.g., Architecture Tradeoff Analysis.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Snowflake Data Engineer at our organization, you will play a vital role in designing, developing, and maintaining our data infrastructure. Your responsibilities will include ingesting, transforming, and distributing data using Snowflake and AWS technologies, and collaborating with various stakeholders to ensure efficient data pipelines and secure data operations.

Your key responsibilities will involve designing and implementing data pipelines using Snowflake and AWS technologies. You will leverage tools like SnowSQL, Snowpipe, NiFi, Matillion, and dbt to ingest, transform, and automate data integration processes. Implementing role-based access controls and managing AWS resources will be crucial for ensuring data security and supporting Snowflake operations, and you will also be responsible for optimizing Snowflake queries and data models for performance and scalability.

To excel in this role, you should have strong proficiency in SQL and Python, hands-on experience with Snowflake and AWS services, and an understanding of ETL/ELT tools, data warehousing concepts, and data quality techniques. Your analytical skills, problem-solving abilities, and excellent communication skills will enable you to collaborate effectively with data analysts, data scientists, and other team members.

Preferred skills include experience with data virtualization, machine learning and AI concepts, and data governance and data security best practices. Staying updated with the latest advancements in Snowflake and AWS technologies will be essential for this role.

If you are a passionate and experienced Snowflake Data Engineer with 5 to 7 years of experience, we invite you to apply and be part of our team. This is a full-time position based in Gurgaon, with a hybrid work mode accommodating India, UK, and US work shifts.

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, and who aspire to make a real impact, both now and in the future. If this resonates with you, you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions).

In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines that serve the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage.
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.

To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- A minimum of 4 years of experience designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- A strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- Understanding of observability, data versioning, and pipeline testing tools.
- Previous engagement with diverse stakeholders, data requirements gathering, and support for iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.

This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities and countries and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow.

Learn more about Siemens careers at: www.siemens.com/careers
Discover the digital world of Siemens here: www.siemens.com/careers/digitalminds

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Engineer in Hyderabad, you will be responsible for designing and optimizing scalable data pipeline architectures and supporting analytics needs across cross-functional teams. Your key responsibilities will include designing, building, and maintaining data pipelines using BigQuery, Python, and SQL; optimizing data flow, automating processes, and scaling infrastructure; developing and managing workflows in Airflow/Cloud Composer and Ascend (or similar ETL tools); implementing data quality checks and testing strategies; supporting CI/CD processes; conducting code reviews and mentoring junior engineers; and collaborating with QA/business teams to troubleshoot issues across environments.

Your core skills should include proficiency in BigQuery, Python, SQL, Airflow/Cloud Composer, and Ascend or similar ETL tools; data integration, warehousing, and pipeline orchestration; data quality frameworks; and incremental load strategies. You should also have strong experience with GCP or AWS serverless data warehouse environments.

Preferred skills for this role include experience with dbt for transformation, Collibra for data governance, and working with unstructured datasets. The qualifications required for this position include a minimum of 5 years in data engineering, a graduate degree in CS, Statistics, or a related field, and strong analytical and SQL expertise.
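To make the incremental load strategies mentioned above concrete, one common pattern is a MERGE from a freshly loaded staging table into the target table. The Python sketch below uses the google-cloud-bigquery client and is illustrative only; the project, dataset, and table names, along with the key and timestamp columns, are assumptions rather than details from this posting.

```python
from google.cloud import bigquery

# Hypothetical fully qualified table names.
TARGET = "my-project.analytics.orders"
STAGING = "my-project.staging.orders_increment"

# Upsert rows from the staging table into the target on the business key,
# only updating rows whose staged copy carries a fresher updated_at value.
MERGE_SQL = f"""
MERGE `{TARGET}` AS t
USING `{STAGING}` AS s
ON t.order_id = s.order_id
WHEN MATCHED AND s.updated_at > t.updated_at THEN
  UPDATE SET t.status = s.status, t.amount = s.amount, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (order_id, status, amount, updated_at)
  VALUES (s.order_id, s.status, s.amount, s.updated_at)
"""

client = bigquery.Client()      # picks up default credentials and project
job = client.query(MERGE_SQL)   # run the MERGE as a standard query job
job.result()                    # wait for completion
print(f"Rows affected: {job.num_dml_affected_rows}")
```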

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

curatAId is seeking a Senior Snowflake Consultant on behalf of our client, a fast-growing organization focused on data-driven innovation. This role combines Snowflake expertise with DevOps, dbt, and Airflow to support the development and operation of a modern, cloud-based enterprise data platform. The ideal candidate will be responsible for building and managing data infrastructure, developing scalable data pipelines, implementing data quality and governance frameworks, and automating workflows for operational efficiency.

To apply for this position, it is mandatory to register on our platform at www.curataid.com and take a 10-minute technical quiz on the Snowflake skill.

Title: Senior Data Engineer
Level: Consultant/Deputy Manager/Manager/Senior Manager
Relevant Experience: Minimum of 5+ years of hands-on experience with Snowflake, along with DevOps, dbt, and Airflow
Must-Have Skills: Data Engineering, Snowflake, dbt, Airflow, and DevOps
Location: Mumbai, Gurgaon, Bengaluru, Chennai, Kolkata, Bhubaneshwar, Coimbatore, Ahmedabad

Qualifications:
- 5+ years of relevant Snowflake experience in a data engineering context (must have).
- 4+ years of relevant experience with dbt, Airflow, and DevOps (must have).
- Strong hands-on experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with cloud data warehouses such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse (must have).
- Experience with version control systems (GitHub, Bitbucket, GitLab).
- Strong SQL expertise.
- Ability to implement best practices for data storage management, security, and retrieval efficiency.
- Experience with pipeline orchestration tools (Fivetran, Stitch, Airflow, etc.).
- Coding proficiency in at least one modern programming language (Python, Java, Scala, etc.).

Posted 3 days ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bangalore/Gurugram/Hyderabad
Experience: 7+ years

We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join Zendesk as a Data Engineering Manager and lead a team of data engineers who deliver meticulously curated data assets to fuel business insights. You will collaborate with Product Managers, Data Scientists, and Data Analysts to drive the successful implementation of data products. We are seeking a leader with advanced skills in data infrastructure, data warehousing, and data architecture, as well as a proven track record of scaling BI teams. Be a part of our mission to embrace data and analytics and create a meaningful impact within our organization.

You will foster the growth and development of a team of data engineers, design, build, and launch new data models and pipelines in production, and act as a player-coach to amplify the effects of your team's work. You will foster connections with diverse teams to understand data requirements, and help develop and support your team in technical architecture, project management, and product knowledge. You will define processes for operational excellence in project management and system reliability, set direction for the team to anticipate strategic and scaling-related challenges, and foster a healthy and collaborative culture that embodies our values.

What You Bring to the Role:
- Bachelor's degree in Computer Science/Engineering or a related field.
- 7+ years of proven experience in Data Engineering and Data Warehousing.
- 3+ years as a manager of data engineering teams.
- Proficiency with SQL and a programming language (Python/Ruby).
- Experience with Snowflake, BigQuery, Airflow, and dbt.
- Familiarity with BI tools (Looker, Tableau) is desirable.
- Proficiency in the modern data stack and architectural strategies.
- Excellent written and oral communication skills.
- Proven track record of coaching/mentoring individual contributors and fostering a culture that values diversity.
- Experience leading SDLC and Scrum/Agile delivery teams.
- Experience working with globally distributed teams preferred.

Tech Stack: SQL, Python/Ruby, Snowflake, BigQuery, Airflow, dbt.

Please note that this position requires being physically located in and working from Pune, Maharashtra, India. Zendesk software was built to bring calm to the chaotic world of customer service. We advocate for digital-first customer experiences and strive to create a fulfilling and inclusive workplace experience. Our hybrid working model allows for connection, collaboration, and learning in person at our offices globally, as well as remote work flexibility. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans. If you require an accommodation to participate in the hiring process, please email peopleandplaces@zendesk.com with your specific request.

Posted 3 days ago

Apply

10.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About McDonald's: One of the world's largest employers, with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Senior Manager, Integrated Test Lead - Data Product Engineering & Delivery (Sr Manager, Technology Testing). Lead comprehensive testing strategy and execution for complex data engineering pipelines and product delivery initiatives. Drive quality assurance across integrated systems, data workflows, and customer-facing applications while coordinating cross-functional testing efforts.

Who we are looking for:

Primary Responsibilities:

Test Strategy & Leadership:
- Design and implement end-to-end testing frameworks for data pipelines, ETL/ELT processes, and analytics platforms.
- Ensure test coverage across ETL/ELT, data transformation, lineage, and consumption layers.
- Develop integrated testing strategies spanning multiple systems, APIs, and data sources.
- Establish testing standards, methodologies, and best practices across the organization.

Data Engineering Testing:
- Create comprehensive test suites for data ingestion, transformation, and output validation.
- Design data quality checks, schema validation, and performance testing for large-scale datasets.
- Implement automated testing for streaming and batch data processing workflows.
- Validate data integrity across multiple environments and systems and against business rules.

Cross-Functional Coordination:
- Collaborate with data engineers, software developers, product managers, and DevOps teams.
- Coordinate testing activities across multiple product streams and release cycles.
- Manage testing dependencies and critical path items in complex delivery timelines.

Quality Assurance & Process Improvement:
- Establish metrics and KPIs for testing effectiveness and product quality to drive continuous improvement in testing processes and tooling.
- Lead root cause analysis for production issues and testing gaps.

Technical Leadership:
- Mentor junior QA engineers and promote testing best practices.
- Evaluate and implement new testing tools and technologies.
- Design scalable testing infrastructure and CI/CD integration.

Skills:
- 10+ years in software testing with 3+ years in leadership roles.
- 8+ years of experience testing data engineering systems, ETL pipelines, or analytics platforms.
- Proven track record with complex, multi-system integration testing.
- Experience in agile/scrum environments with rapid delivery cycles.
- Strong SQL experience with major databases (Redshift, BigQuery, etc.).
- Experience with cloud platforms (AWS, GCP) and their data services.
- Knowledge of data pipeline tools (Apache Airflow, Kafka, Confluent, Spark, dbt, etc.).
- Proficiency in data warehousing, data architecture, reporting, and analytics applications.
- Scripting languages (Python, Java, Bash) for test automation.
- API testing tools and methodologies.
- CI/CD/CT tools and practices.
- Strong project management and organizational skills.
- Excellent verbal and written communication abilities.
- Experience managing multiple priorities and competing deadlines.

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid

Posted 3 days ago

Apply

8.0 - 13.0 years

0 - 0 Lacs

Pune, Hyderabad, Mumbai City

On-site

Position Overview: We are seeking a highly skilled and experienced Senior Snowflake Developer/Lead to join our dynamic team. This role is ideal for individuals who are passionate about data engineering and analytics and who thrive in a collaborative environment. As a Senior Snowflake Developer, you will play a pivotal role in designing, developing, and implementing data solutions that drive business insights and decision-making. With an annual salary of 20,00,000, this full-time position offers an exciting opportunity to work in a fast-paced environment in one of our key locations: Pune, Mumbai City, or Hyderabad.

Key Responsibilities:
- Design and develop scalable data pipelines and ETL processes using Snowflake and other relevant technologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Optimize and maintain existing data models and workflows to ensure high performance and reliability.
- Implement best practices for data governance, security, and compliance.
- Lead and mentor junior developers, providing guidance and support in their professional development.
- Conduct code reviews and ensure adherence to coding standards and quality assurance processes.
- Stay updated on the latest industry trends and technologies related to data engineering and analytics.
- Participate in project planning and estimation activities, ensuring timely delivery of high-quality solutions.

Qualifications:
- Experience: 8 to 13 years of relevant work experience in data engineering, with a strong focus on Snowflake.
- Technical Skills: Proficiency in SQL, Python, dbt, and Snowflake.
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Leadership Skills: Proven ability to lead projects and mentor team members effectively.
- Analytical Skills: Strong problem-solving skills with the ability to analyze complex data sets and derive actionable insights.
- Communication Skills: Excellent verbal and written communication skills, with the ability to convey technical concepts to non-technical stakeholders.

This is a fantastic opportunity for a motivated individual looking to advance their career in a leading organization. If you are ready to take on new challenges and make a significant impact, we encourage you to apply!

Posted 3 days ago

Apply