
315 Data Ingestion Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Engineer at GlobalLogic, you will be responsible for architecting, building, and maintaining complex ETL/ELT pipelines for batch and real-time data processing using various tools and programming languages. Your key duties will include optimizing existing data pipelines for performance, cost-effectiveness, and reliability, as well as implementing data quality checks, monitoring, and alerting mechanisms to ensure data integrity. Additionally, you will play a crucial role in ensuring data security, privacy, and compliance with relevant regulations such as GDPR and local data laws.

To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Excellent analytical, problem-solving, and critical thinking skills with meticulous attention to detail are essential. Strong communication (written and verbal) and interpersonal skills are also required, along with the ability to collaborate effectively with cross-functional teams. Experience with Agile/Scrum development methodologies is considered a plus.

Your responsibilities will involve providing technical leadership and architecture by designing and implementing robust, scalable, and efficient data architectures that align with organizational strategy and future growth. You will define and enforce data engineering best practices, evaluate and recommend new technologies, and oversee the end-to-end data development lifecycle. As a leader, you will mentor and guide a team of data engineers, conduct code reviews, provide feedback, and promote a culture of engineering excellence. You will collaborate closely with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements and translate them into technical solutions. Your role will also involve communicating complex technical concepts and data strategies effectively to both technical and non-technical audiences.

At GlobalLogic, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. By joining our team, you will have the chance to work on impactful projects, engage your curiosity and problem-solving skills, and contribute to shaping cutting-edge solutions that redefine industries. With a commitment to integrity and trust, GlobalLogic provides a safe, reliable, and ethical global environment where you can thrive both personally and professionally.
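For illustration only: a minimal PySpark sketch of the kind of data-quality gate this role describes, where a batch is rejected before it reaches downstream consumers. The path, column names, and failure rule are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

# Hypothetical input: a daily batch of order records.
df = spark.read.parquet("s3://example-bucket/orders/2024-01-01/")

# Count rows that violate basic integrity rules.
violations = df.filter(
    F.col("order_id").isNull() | (F.col("amount") < 0)
).count()

if violations > 0:
    # In a production pipeline this would page an on-call or fail the job
    # so bad data never propagates downstream.
    raise ValueError(f"{violations} rows failed data-quality checks")

df.write.mode("overwrite").parquet("s3://example-bucket/orders_validated/")
```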

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a dynamic global technology company, Schaeffler's success stems from its entrepreneurial spirit and long history of private ownership. Partnering with major automobile manufacturers, as well as key players in the aerospace and industrial sectors, we offer numerous development opportunities globally.

Your key responsibilities include developing data pipelines and utilizing methods and tools to collect, store, process, and analyze complex data sets for assigned operations or functions. You will design, govern, build, and operate solutions for large-scale data architectures and applications across businesses and functions. Additionally, you will manage and work hands-on with big data tools and frameworks, and implement ETL tools and processes, data virtualization, and federation services. Engineering data integration pipelines and reusable data services using cross-functional data models, semantic technologies, and data integration solutions is also part of your role. You will define, implement, and apply data governance policies for all data flows of data architectures, focusing on the digital platform and data lake. Furthermore, you will define and implement policies for data ingestion, retention, lineage, access, data service API management, and usage in collaboration with data management and IT functions.

To qualify for this position, you should hold a graduate degree in Computer Science, Applied Computer Science, or Software Engineering with 3 to 5 years of relevant experience. Emphasizing respect and valuing diverse ideas and perspectives among our global workforce is essential to us. By fostering creativity through appreciating differences, we drive innovation and contribute to sustainable value creation for our stakeholders and society as a whole. Together, we are shaping the future with innovation, offering exciting assignments and outstanding development opportunities. We eagerly anticipate your application.

For technical inquiries, please contact: technical-recruiting-support-AP@schaeffler.com. For more information and to apply, visit www.schaeffler.com/careers.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Kolkata, West Bengal

On-site

As a Data Strategy Lead within our Retail/FMCG/Manufacturing/Energy and Distribution/Publishing organization, you will be responsible for developing and implementing data strategies that align with the business objectives. Your key responsibilities will include collaborating with stakeholders from various departments to identify data needs, integrating data from multiple sources, implementing automation solutions for streamlined processes, and leading data analysis efforts to identify opportunities for improvement. You will provide insights and recommendations based on data analysis to support our organization's strategy and decision-making processes. Additionally, you will ensure compliance with data privacy regulations and industry standards, define and track key performance indicators for operations, utilize predictive modeling techniques, and work closely with the IT team to ensure data integrity and security.

To be successful in this role, you should hold a Bachelor's degree in Computer Science, Information Systems, Statistics, Data Management, Business Administration, or a related field; a master's degree is preferred. You should have experience playing a lead role in at least 3 BI and Analytics implementation projects and possess a strong understanding of Retail/FMCG/Manufacturing/Energy and Distribution/Publishing operations. Proficiency in data lake technologies such as AWS / Google / Snowflake, as well as data analysis tools like SQL, Python, R, SAS, Power BI, Tableau, or similar, is required. Experience with data integration, ETL processes, and data warehouse concepts, and knowledge of advanced analytics skill sets like Machine Learning and AI, will be advantageous. Strong communication and collaboration skills are essential, along with excellent analytical and problem-solving abilities. You should also have knowledge of data privacy regulations and compliance requirements relevant to the Retail/FMCG/Manufacturing/Energy and Distribution/Publishing industry. Certifications in data management, business intelligence, or related areas will be a plus.

In summary, as our Data Strategy Lead, you will play a crucial role in driving data-driven decision-making processes, optimizing operations, and enhancing overall business performance within our Retail/FMCG/Manufacturing/Energy and Distribution/Publishing organization.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Salesforce Data Cloud (CDP) professional, you will be responsible for designing, developing, and deploying solutions on the Salesforce Data Cloud platform. Your role will involve CDP implementation, collaborating with stakeholders to gather requirements, and translating them into technical specifications. You will build custom applications, integrations, and data pipelines using Salesforce Data Cloud tools and technologies.

In this position, you will develop and optimize data models to support business processes and reporting needs. Implementing data governance and security best practices will be crucial to ensure data integrity and compliance. Troubleshooting, debugging, and performance tuning of Salesforce Data Cloud solutions will also be part of your responsibilities. It is essential to stay current with Salesforce Data Cloud updates, best practices, and industry trends in order to provide technical guidance and support to other team members and end-users. Documenting solution designs, configurations, and customizations will be required as well.

To qualify for this role, you must have a Bachelor's degree in Computer Science, Information Technology, or a related field. Additionally, you should hold an SFDC certification and possess 3 to 6 years of experience in software development with a focus on the Salesforce platform. A strong understanding of relational databases, SQL, and data modeling concepts is necessary. Familiarity with data governance principles and practices, excellent problem-solving skills, and effective communication and collaboration abilities are also essential for success in this position.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description:

Business Title: Data Engineer
Years of Experience: Min 3 and max up to 7.

A Software Engineer is curious and self-driven to build and maintain multi-terabyte operational marketing databases and integrate them with cloud technologies. Our databases typically house millions of individuals and billions of transactions and interact with various web services and cloud-based platforms. Once hired, the qualified candidate will be immersed in the development and maintenance of multiple database solutions to meet global client business objectives.

Must Have Skills:
- Experience in cloud computing, particularly with one or more of the following platforms: AWS, Azure (preferred), or GCP.
- Proficiency in Snowflake and Oracle as databases, along with strong SQL skills.
- Familiarity with ETL tools, with a preference for Informatica.
- Experience with PySpark and either Python or UNIX shell scripting.
- Knowledge of workflow orchestration tools such as Tivoli, Tidal, or Stonebranch.
- A solid understanding of relational and non-relational databases, including when to use each type.
- Practical knowledge of data structures, databases, data warehousing, data marts, data modeling, and data ingestion and transformation processes.
- Experience in data warehousing.
- Strong debugging skills.
- Familiarity with version control systems (SVN) and project management tools (JIRA).
- Excellent communication skills.

Good To Have Skills:
- Working experience with the Agile framework.
- Experience in client communication and exposure to client interaction.
- Cross-functional teamwork, internally and with external clients.
- Working as an IC: understand the design/task and develop the code; perform mid- to complex-level tasks with minimal supervision; documentation; less supervision and guidance from senior resources will be required.

Education Qualification: Bachelor's or Master's degree.
Certification, If Any: Any basic-level certification in AWS/Azure/GCP (Azure preferred); Snowflake Associate/Core.
Shift timing: 12 PM to 9 PM and/or 2 PM to 11 PM, IST time zone.
Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 day ago

Apply

3.0 - 8.0 years

13 - 18 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?

The Digital Data Strategy Team within the broader EDEA (Enterprise Digital Experimentation & Analytics) in EDDS supports all other EDEA VP teams and product & marketing partner teams with data strategy, automation & insights, and creates and manages automated insight packs and multiple derived data layers. The team partners with Technology to enable end-to-end MIS automation and ODL (Organized Data Layer) creation, and drives process automation, optimization, and Data & MIS quality in an efficient manner. The team also supports strategic Data & Platform initiatives.

This role will report to the Director, Digital Data Strategy, EDEA and will be based in Gurgaon. The candidate will be responsible for delivery of high-impact data and automated insights products to enable other analytics partners, marketing partners and product owners to optimize across our platform, demand generation, acquisition and membership experience domains.

Your responsibilities include:
- Elevate data intelligence: set the vision for intuitive, integrated and intelligent frameworks to enable smart insights; discover new sources of information for strong enrichment of business applications.
- Modernization: keep up with the latest industry research and emerging technologies to ensure we are appropriately leveraging new techniques and capabilities, and drive strategic change in tools & capabilities. Develop a roadmap to transition our analytical and production use cases to the cloud platform, develop next-generation MIS products through modern full-stack BI tools, and enable self-serve analytics.
- Define the digital data strategy vision as the business owner of digital analytics data, and partner to achieve the vision of Data as a Service to enable unified, scalable & secure data assets for business applications.
- Bring a strong understanding of the key drivers & dynamics of digital data, data architecture & design, and data linkage & usage, along with in-depth knowledge of platforms like Big Data/Cornerstone, Lumi/Google Cloud Platform, data ingestion and Organized Data Layers.
- Stay abreast of the latest industry and enterprise-wide data governance and data quality practices and privacy policies, engrain them in all data products & capabilities, and be a guiding light for the broader team.
- Partner and collaborate with multiple partners, agencies & colleagues to develop capabilities that will help in maximizing demand generation program ROI.
- Lead and develop a highly engaged team with a diverse skill set to deliver automated digital & data solutions.

Minimum Qualifications
- 5+ years of relevant experience in automation and data product management/data strategy with adequate data quality, economies of scale and process governance.
- Proven thought leadership, solid project management skills, and strong communication, collaboration, relationship and conflict management skills.
- Bachelor's or Master's degree in Engineering/Management.
- Knowledge of Big Data-oriented tools (e.g. BigQuery, Hive, SQL, Python/R, PySpark); advanced Excel/VBA and PowerPoint; experience managing complex processes and integration with upstream and downstream systems/processes; hands-on experience with visualization tools like Tableau, Power BI, Sisense, etc.

Preferred Qualifications
- Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders/partners in a succinct and concise manner.
- Strong understanding of internal platforms like Big Data/Cornerstone and Lumi/Google Cloud Platform.
- Knowledge of Agile tools and methodologies.

Enterprise Leadership Behaviors:
- Set the Agenda: Define What Winning Looks Like, Put Enterprise Thinking First, Lead with an External Perspective
- Bring Others with You: Build the Best Team, Seek & Provide Coaching Feedback, Make Collaboration Essential
- Do It the Right Way: Communicate Frequently, Candidly & Clearly, Make Decisions Quickly & Effectively, Live the Blue Box Values, Great Leadership Demands Courage

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 2 days ago

Apply

1.0 - 4.0 years

5 - 8 Lacs

Mumbai

Work from Office

We are hiring a Data Engineer to design and manage data pipelines from factory floors to the Azure cloud, supporting our central data lakehouse architecture. You'll work closely with OT engineers, architects, and AI teams to move data from edge devices into curated layers (Bronze → Silver → Gold), ensuring high data quality, security, and performance. Your work will directly enable advanced analytics and AI in production and operations.

Key job functions:
- Build data ingestion and transformation pipelines using Azure Data Factory, IoT Hub, and Databricks
- Integrate OT sensor data using protocols like OPC-UA and MQTT
- Design Medallion architecture flows with Delta Lake and Synapse
- Monitor and optimize data performance and reliability
- Implement data quality, observability, and lineage practices (e.g. with Purview or Unity Catalog)
- Collaborate with OT and IT teams to ensure contextualized, usable data
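As a hedged illustration of the Bronze → Silver step in a Medallion flow like the one described, here is a minimal PySpark/Delta Lake sketch; the lake paths, columns, and cleaning rules are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw sensor events landed from edge devices (hypothetical path).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/sensor_events")

# Silver: de-duplicate, normalize timestamps, and drop malformed readings.
silver = (
    bronze.dropDuplicates(["device_id", "event_ts"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("temperature").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/sensor_events")
```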

Posted 2 days ago

Apply

1.0 - 6.0 years

8 - 13 Lacs

Pune

Work from Office

Cloud Observability Administrator

Pune, India - Enterprise IT - 22685

Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS is looking for a Cloud Observability Administrator to join our team in Pune. As a Cloud Observability Administrator, you will work on the configuration of various observability tools and create solutions to address business problems across multiple client engagements. You will leverage information from the requirements-gathering phase and utilize past experience to design a flexible and scalable solution, and collaborate with other team members (involved in the requirements gathering, testing, roll-out and operations phases) to ensure seamless transitions.

What You'll Do:
- Deploy, manage, and operate scalable, highly available, and fault-tolerant Splunk architecture.
- Onboard various kinds of log sources like Windows/Linux/Firewalls/Network into Splunk.
- Develop alerts, dashboards and reports in Splunk.
- Write complex SPL queries.
- Manage and administer a distributed Splunk architecture.
- Apply very good knowledge of the configuration files used in Splunk for data ingestion and field extraction.
- Perform regular upgrades of Splunk and relevant apps/add-ons.
- Bring a comprehensive understanding of AWS infrastructure, including EC2, EKS, VPC, CloudTrail, Lambda, etc.
- Automate manual tasks using Shell/PowerShell scripting; knowledge of Python scripting is a plus.
- Use good knowledge of Linux commands to manage administration of servers.

What You'll Bring:
- 1+ years of experience in Splunk development and administration.
- Bachelor's degree in CS, EE, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- 1-1.5 years of relevant consulting-industry experience working on medium-large scale technology solution delivery engagements.
- Strong verbal, written and team presentation communication skills, with the ability to articulate results and issues to internal and client teams.
- Proven ability to work creatively and analytically in a problem-solving environment.
- Ability to work within a virtual global team environment and contribute to the overall timely delivery of multiple projects.
- Knowledge of observability tools such as Cribl, Datadog, or PagerDuty is a plus.
- Knowledge of AWS Prometheus and Grafana is a plus.
- Knowledge of APM concepts is a plus.
- Knowledge of Linux/Python scripting is a plus.
- Splunk certification is a plus.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. No agency calls, please.
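For a flavor of the Splunk work described, here is a hedged sketch using the Splunk Python SDK (splunk-sdk) to run a oneshot SPL search; the host, credentials, index, and query are placeholders, not ZS systems.

```python
import json

import splunklib.client as client

# Placeholder connection details; 8089 is Splunk's default management port.
service = client.connect(
    host="splunk.example.com",
    port=8089,
    username="admin",
    password="changeme",
)

# Oneshot search: count error events per sourcetype over the last hour.
spl = "search index=main error earliest=-1h | stats count by sourcetype"
response = service.jobs.oneshot(spl, output_mode="json")
print(json.dumps(json.loads(response.read()), indent=2))
```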

Posted 2 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Gurugram

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.
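As a hedged sketch of one ETL pattern named in the responsibilities (Kafka into a data lake via Spark Structured Streaming): the broker address, topic, schema, and paths below are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-lake-etl").getOrCreate()

# Expected shape of each Kafka message (hypothetical).
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Parse the Kafka value payload from JSON into typed columns.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Land the parsed stream in the lake; the checkpoint makes it restartable.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/orders/")
          .option("checkpointLocation", "s3://example-bucket/_checkpoints/orders/")
          .start()
)
query.awaitTermination()
```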

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

We are a brand new Automotive Technology Start-up dedicated to developing driver safety systems tailored for Indian conditions. Our focus is on equipping riders and drivers with advanced rider assistance systems and affordable obstacle-avoidance automotive systems. Leveraging cutting-edge technologies such as Computer Vision, AI/ML, Big Data, Sensor Fusion, Embedded Systems, and IoT, we aim to create smart assistance systems that enhance safety on the road.

Our current suite of products includes innovative solutions like the Front Collision Warning System (FCWS), Sleep Driver Alert System, Driver Monitoring System, and Driver Evaluation and Authentication. These products are designed to address critical safety issues and improve the overall driving experience. The Start-up is strongly supported by a $500Mn Group, a prominent automotive systems manufacturer serving global OEMs. With 29 manufacturing facilities spread across 7 states in India and a workforce exceeding 15,000 employees, the Group brings valuable expertise and resources to our venture.

We are looking for a skilled AI/ML Engineer to join our early-stage start-up and contribute to the development of our rider safety product. As an AI/ML Engineer, you will be responsible for designing, developing, and integrating computer vision algorithms that play a crucial role in capturing and analyzing the surroundings to provide effective alerts for riders.

In this role, you will:
- Develop state-of-the-art CNN-based computer vision object detectors and classifiers for real-time detection of road objects
- Design and implement data ingestion, annotation, and model training pipelines to handle large volumes of video data and images
- Create model visualizations, conduct hyperparameter tuning, and leverage data-driven insights to enhance model performance
- Optimize models for efficient inference times and deploy them on low-power embedded IoT ARM CPUs
- Establish CI/CD tests to evaluate multiple models on the test set effectively

The ideal candidate should possess:
- A BS or MS degree in Computer Science or Engineering from reputable educational institutions
- Experience in building deep-learning object detectors for computer vision applications
- Proficiency in popular CNN architectures such as AlexNet, GoogLeNet, MobileNet, Darknet, YOLO, SSD, and ResNet
- Hands-on experience with libraries and frameworks like Caffe, TensorFlow, Keras, PyTorch, OpenCV, ARM Compute Library, and OpenCL
- Knowledge of transfer learning and training models with limited data
- Strong programming skills in modern C++14 or above and Python, along with a solid understanding of data structures and algorithms
- Familiarity with working on small embedded computers, hardware peripherals, Docker containers, and Linux-flavored operating systems

To excel in this role, you can stand out by having:
- Prior experience in product development within an early-stage start-up environment
- Expertise in deploying scalable ML models for Android/iOS platforms
- Noteworthy contributions to Open Source projects or achievements in coding Hackathons
- Passion for Raspberry Pi and ARM CPUs
- Keen interest in the field of Autonomous Driving

Join us in our mission to revolutionize driver safety through innovative technology solutions tailored for Indian roads.
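Not taken from the posting, but as a flavor of the transfer-learning work it mentions: a minimal PyTorch sketch adapting a pretrained MobileNetV2 classifier head to a small set of road-object classes. The class count and hyperparameters are assumptions; a real detector (YOLO/SSD) would additionally need a detection head and labeled bounding boxes.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical label set for road objects.
num_classes = 4  # e.g. pedestrian, vehicle, pothole, animal

# Load an ImageNet-pretrained MobileNetV2 and swap in a new classifier head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

# Freeze the convolutional backbone so only the new head trains,
# the usual approach when labeled data is limited.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```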

Posted 3 days ago

Apply

3.0 - 7.0 years

4 - 8 Lacs

Pune

Work from Office

As a data engineer, you will be responsible for delivering data intelligence solutions to our customers all around the globe, based on an innovative product which provides insights into the performance of their material handling systems. You will be working on implementing and deploying the product as well as designing solutions to fit it to our customer needs. You will work together with an energetic and multidisciplinary team to build end-to-end data ingestion pipelines and implement and deploy dashboards.

Your tasks and responsibilities:
- You will design and implement data & dashboarding solutions to maximize customer value.
- You will deploy and automate the data pipelines and dashboards to enable further project implementation.
- You embrace working in an international, diverse team, with an open and respectful atmosphere.
- You leverage data by making it available for other teams within our department as well, to enable our platform vision.
- You communicate and work closely with other groups within Vanderlande and the project team.
- You enjoy an independent and self-reliant way of working with a proactive style of communication, taking ownership to provide the best possible solution.
- You will be part of an agile team that encourages you to speak up freely about improvements, concerns, and blockages. As part of Scrum methodology, you will independently create stories and participate in the refinement process.
- You collect feedback and always search for opportunities to improve the existing standardized product.
- You execute projects from conception through client handover with a positive contribution to technical performance and the organization.
- You will take the lead in communication with the different stakeholders involved in the projects being deployed.

Your profile:
- Bachelor's or master's degree in computer science, IT, or equivalent, and a minimum of 6+ years of experience building and deploying complex data pipelines and data solutions.
- Experience developing end-to-end data pipelines using technologies like Databricks.
- Experience with visualization software, preferably Splunk (or else Power BI, Tableau, or similar).
- Strong experience with SQL & Python, with hands-on experience in data modeling.
- Hands-on experience with programming in Python or Java, and proficiency in test-driven development using pytest (see the sketch after this posting).
- Experience with PySpark or Spark SQL to deal with distributed data.
- Experience with data schemas (e.g. JSON/XML/Avro).
- Experience in deploying services as containers (e.g. Docker, Podman).
- Experience in working with cloud services (preferably Azure).
- Experience with streaming and/or batch storage (e.g. Kafka, Oracle) is a plus.
- Experience in creating APIs is a plus.
- Experience in guiding, motivating and training engineers.
- Experience in data quality management and monitoring is a plus.
- Strong communication skills in English.
- Skilled at breaking down large problems into smaller, manageable parts.
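The profile asks for test-driven development with pytest; below is a minimal, hypothetical sketch of that style: a pure transformation function plus the test that pins down its behavior. The names and the throughput rule are invented for illustration.

```python
# transform.py (hypothetical module)
def normalize_throughput(records):
    """Convert raw conveyor counts to items per minute, skipping bad rows."""
    return [
        {"station": r["station"], "ipm": r["count"] / (r["seconds"] / 60)}
        for r in records
        if r.get("seconds") and r["seconds"] > 0
    ]

# test_transform.py, run with `pytest`
def test_normalize_throughput_skips_zero_duration():
    raw = [
        {"station": "S1", "count": 120, "seconds": 60},
        {"station": "S2", "count": 50, "seconds": 0},  # must be dropped
    ]
    assert normalize_throughput(raw) == [{"station": "S1", "ipm": 120.0}]
```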

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Pune

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Noida

Work from Office

Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role: We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

Maharashtra

On-site

You are a highly skilled and motivated Lead Data Scientist / Machine Learning Engineer sought to join a team pivotal in the development of a cutting-edge reporting platform. This platform is designed to measure and optimize online marketing campaigns effectively. Your role will involve focusing on data engineering, the ML model lifecycle, and cloud-native technologies.

You will be responsible for designing, building, and maintaining scalable ELT pipelines, ensuring high data quality, integrity, and governance. Additionally, you will develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experimenting with different algorithms and leveraging various models will be crucial in driving insights and recommendations. Furthermore, you will deploy and monitor ML models in production and implement CI/CD pipelines for seamless updates and retraining. You will work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translating complex model insights into actionable business recommendations and presenting findings to stakeholders will also be part of your responsibilities.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
- Experience: 5-10 years with the mentioned skill set and relevant hands-on experience.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools); a sketch follows after this posting.
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
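Where the posting names MLflow-style experiment tracking, a hedged minimal sketch of logging a run looks like this; the model, metric, and run name are placeholders, not the platform's actual pipeline.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for campaign-response training data.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run(run_name="campaign-response-model"):
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Record the configuration and a training metric for later comparison.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_r2", model.score(X, y))

    # Persist the fitted model as a versioned artifact.
    mlflow.sklearn.log_model(model, "model")
```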

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ideal candidate for this role, you will be responsible for designing and architecting scalable Big Data solutions within the Hadoop ecosystem. Your key duties will include leading architecture-level discussions for data platforms and analytics systems, constructing and optimizing data pipelines utilizing PySpark and other distributed computing tools, and transforming business requirements into scalable data models and integration workflows. It will be crucial for you to guarantee the high performance and availability of enterprise-grade data processing systems. Additionally, you will play a vital role in mentoring development teams and offering guidance on best practices and performance tuning.

Your must-have skills for this position include architect-level experience with the Big Data ecosystem and enterprise data solutions, proficiency in Hadoop, PySpark, and distributed data processing frameworks, as well as hands-on experience with SQL and data warehousing concepts. A deep understanding of data lake architecture, data ingestion, ETL, and orchestration tools, along with experience in performance optimization and large-scale data handling, will be essential. Your problem-solving, design, and analytical skills should be excellent.

While not mandatory, it would be beneficial if you have exposure to cloud platforms such as AWS, Azure, or GCP for data solutions, and possess knowledge of data governance, data security, and metadata management. Joining our team will provide you with the opportunity to work on cutting-edge Big Data technologies, gain leadership exposure, and be directly involved in architectural decisions. This role offers stability as a full-time position within a top-tier tech team, ensuring work-life balance with a 5-day working schedule.

Posted 5 days ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job title: Senior Software Engineer
Experience: 5-8 years
Primary skills: Python, Spark or PySpark, DWH ETL
Database: Spark SQL or PostgreSQL
Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog)
Work Model: Hybrid (twice weekly)
Cab Facility: Yes
Work Timings: 10am to 7pm
Interview Process: 3 rounds (3rd round face-to-face, mandatory)
Work Location: Karle Town Tech Park, Nagawara, Hebbal, Bengaluru 560045

About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you:
- You have experience working as a Data Engineer with strong database fundamentals and an ETL background.
- You have experience working in a data warehouse environment, dealing with data volumes of terabytes and above.
- You have experience working with relational data systems, preferably PostgreSQL and Spark SQL.
- You have excellent design and coding skills and can mentor a junior engineer in the team.
- You have excellent written and verbal communication skills.
- You are experienced and comfortable working with global clients.
- You work well with teams and are able to work with multiple collaborators, including clients, vendors and delivery teams.
- You are proficient with bug tracking and test management toolsets that support development processes such as CI/CD.

What you will enjoy in this role:
- As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry.
- You will get to work on the latest tools and technology and deal with data of petabyte scale.
- Work on homegrown frameworks on Spark, Airflow etc.
- Exposure to the Digital Marketing Domain, where Epsilon is a market leader.
- Understand and work closely with consumer data across different segments, which will eventually provide insights into consumer behaviours and patterns to design digital ad strategies.
- As part of a dynamic team, you will have opportunities to innovate and put your recommendations forward, using existing standard methodologies and defining new ones as per evolving industry standards.
- Opportunity to work with Business, System and Delivery to build a solid foundation in the Digital Marketing Domain.
- An open and transparent environment that values innovation and efficiency.

Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

What will you do?
- Develop a deep understanding of the business context under which your team operates, and present feature recommendations in an agile working environment.
- Lead, design and code solutions, on and off database, to ensure application access and enable data-driven decision making for the company's multi-faceted ad serving operations.
- Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible and evolving in lockstep with the needs of the ever-changing business model. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.
- Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/Spark SQL, integrating various data sources to support business operations.
- Lead in the areas of solution design, code development, quality assurance, data modelling and business intelligence.
- Mentor junior engineers in the team.
- Stay abreast of developments in the data world in terms of governance, quality and performance optimization.
- Hold effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications:
- Bachelor's degree in Computer Science or an equivalent degree is required.
- 5-8 years of data engineering experience with expertise using Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and technical understanding in these areas.
- Ability to monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required.
- Solid experience in basic and advanced SQL writing and tuning.
- Experience with Python.
- Solid understanding of CI/CD practices, with experience in Git for version control and integration for Spark data projects.
- Good understanding of disaster recovery and business continuity solutions.
- Experience with scheduling applications with complex interdependencies, preferably Airflow (see the sketch after this posting).
- Good experience in working with geographically and culturally diverse teams.
- Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue or Databricks.
- Excellent written and verbal communication skills.
- Ability to handle complex products, manage multiple priorities, and diagnose and solve problems quickly.
- Diligent, able to multi-task and prioritize, with good time management.
- Good to have: knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements.

Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
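The posting twice points at Airflow for orchestration; as a hedged illustration, here is a minimal Airflow 2.x DAG with two dependent tasks. The DAG id, task names, and schedule are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # placeholder work

def transform():
    print("clean and model the extracted data")  # placeholder work

with DAG(
    dag_id="daily_marketing_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform  # transform runs only after extract succeeds
```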

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data/AWS Engineer at Waters Global Research, you will be part of a dynamic team focused on researching and developing self-diagnosing, self-healing instruments to enhance the user experience of our customers. By leveraging cutting-edge technologies and innovative solutions, you will play a crucial role in advancing our analytical chemistry instruments that have a direct impact on various fields such as laboratory testing, drug discovery, and food safety.

Your primary responsibility will be to develop data pipelines for specialty instrument data and Gen AI processes, train machine learning models for error diagnosis, and automate manual processes to optimize instrument procedures. You will work on projects aimed at interpreting raw data results, cleaning anomalous data, and deploying models in AWS to collect and analyze results effectively.

Key Responsibilities:
- Build data pipelines in AWS using services like S3, Lambda, IoT Core, and EC2.
- Create and maintain dashboards to monitor data health and performance.
- Containerize models and deploy them in AWS for efficient data processing.
- Develop Python data pipelines to handle data frames and matrices, ensuring smooth data ingestion, transformation, and storage.
- Collaborate with Machine Learning engineers to evaluate data and models, and present findings to stakeholders.
- Mentor and review code of team members to ensure best coding practices and adherence to standards.

Qualifications:

Required Qualifications:
- Bachelor's degree in computer science or a related field with 5-8 years of relevant work experience.
- Proficiency in AWS services such as S3, EC2, Lambda, and IAM.
- Experience with containerization and deployment of code in AWS.
- Strong programming skills in Python for OOP and/or functional programming.
- Familiarity with Git, BASH, and the command prompt.
- Ability to drive new capabilities, solutions, and data best practices from technical documentation.
- Excellent communication skills to convey results effectively to non-data scientists.

Desired Qualifications:
- Experience with C#, C++, and .NET considered a plus.

What We Offer:
- Hybrid role with competitive compensation and great benefits.
- Continuous professional development opportunities.
- Inclusive environment that encourages contributions from all team members.
- Reasonable adjustments to the interview process based on individual needs.

Join Waters Corporation, a global leader in specialty measurement, and be part of a team that drives innovation in chromatography, mass spectrometry, and thermal analysis. With a focus on creating business advantages for various industries, including life sciences, materials, and food sciences, we aim to transform healthcare delivery, environmental management, food safety, and water quality.

At Waters, we empower our employees to unlock their full potential, learn, grow, and make a tangible impact on human health and well-being. We value collaboration, problem-solving, and innovation to address the challenges of today and tomorrow. Join us to be part of a team that delivers benefits as one and provides insights for a better future.
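The role lists S3, Lambda, and IoT Core pipeline work; as a hedged sketch, here is a Lambda handler that validates an instrument file on arrival and copies it to a curated prefix. The bucket names and the integrity rule are assumptions, not Waters systems.

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put notification for a raw instrument file."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Read and parse the newly landed file.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    payload = json.loads(body)

    # Minimal integrity check before promoting the file downstream.
    if "run_id" not in payload:
        raise ValueError(f"{key}: missing run_id, rejecting file")

    # Copy the validated payload into a curated prefix (hypothetical bucket).
    s3.put_object(
        Bucket="curated-instrument-data",
        Key=f"validated/{key}",
        Body=body,
    )
```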

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing sustainable businesses that are future-ready. Our workforce of over 230,000 employees and business partners spread across 65 countries ensures that we fulfill our commitment to helping customers, colleagues, and communities thrive amidst a constantly changing world.

As a Databricks Developer at Wipro, you will be expected to possess the following essential skills:
- Cloud certification in Azure Data Engineer or a related category
- Proficiency in Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, and curation
- Experience in semantic modelling and optimizing data models to function within Rahona
- Familiarity with Azure data ingestion from on-prem sources such as mainframe, SQL Server, and Oracle
- Proficiency in Sqoop and Hadoop
- Ability to use Microsoft Excel for metadata files containing ingestion requirements
- Any additional certification in Azure/AWS/GCP and hands-on experience in cloud data engineering
- Strong programming skills in Python, Scala, or Java

This position is available in multiple locations, including Pune, Bangalore, Coimbatore, and Chennai. The mandatory skill set required for this role is Databricks - Data Engineering. The ideal candidate should have 5-8 years of experience in the field.

At Wipro, we are in the process of building a modern organization that is committed to digital transformation. We are seeking individuals who are driven by the concept of reinvention - of themselves, their careers, and their skills. We encourage a culture of continuous evolution within our business and industry, adapting to the changing world around us. Join us in a purpose-driven environment that empowers you to craft your own reinvention. Realize your ambitions at Wipro, where applications from individuals with disabilities are highly encouraged.

Posted 6 days ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

About your role
An Expert Engineer is a seasoned technology expert with strong programming, engineering, and problem-solving skills, able to deliver value to the business faster and with superlative quality. Their code and designs meet business, technical, non-functional, and operational requirements, most of the time without defects or incidents. If a relentless focus on technical and engineering excellence, along with adding value to the business, excites you, this is absolutely the role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, we are looking for you. You will understand system requirements, then analyse, design, develop, and test application systems following the defined standards. The candidate is expected to display professional ethics in their approach to work and to exhibit a high level of ownership within a demanding working environment.

About you

Essential Skills
- Excellent software design, programming, engineering, and problem-solving skills.
- Strong experience in data ingestion, transformation, and distribution using AWS or Snowflake (a stage-based load sketch follows this listing).
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools such as NiFi, Matillion, and dbt.
- Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB, and VPCs.
- Familiarity with building data pipelines that leverage the full power and best practices of Snowflake, and with integrating the common technologies that work with it (code CI/CD, monitoring, orchestration, data quality).
- Experience designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic.
- Experience designing data ingestion and orchestration pipelines using AWS and Control-M.
- Ability to establish strategies for data extraction, ingestion, transformation, automation, and consumption.
- Experience with data lake concepts covering structured, semi-structured, and unstructured data.
- Experience creating CI/CD processes for Snowflake.
- Experience with strategies for data testing, data quality, code quality, and code coverage.
- Ability, willingness, and openness to experiment with, evaluate, and adopt new technologies.
- Passion for technology, problem solving, and teamwork.
- A go-getter, able to navigate across roles, functions, and business units to collaborate and to drive agreements and changes from drawing board to live systems.
- A lifelong learner who brings contemporary practices, technologies, and ways of working to the organization.
- An effective collaborator, adept at using all effective modes of communication and collaboration tools.
- Experience delivering on data-related non-functional requirements, such as:
  - Hands-on experience dealing with large volumes of historical data across markets/geographies.
  - Manipulating, processing, and extracting value from large, disconnected datasets.
  - Building water-tight data quality gates on investment management data.
  - Generic handling of standard business scenarios such as missing data, holidays, and out-of-tolerance errors.

Experience and Qualification:
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university
- 7 to 10 years of relevant experience in total

Personal Characteristics
- Good interpersonal and communication skills; a strong team player.
- Ability to work at both a strategic and a tactical level.
- Ability to convey strong messages in a polite but firm manner.
- Self-motivation is essential; should demonstrate commitment to high-quality design and development.
- Ability to develop and maintain working relationships with several stakeholders.
- Flexibility and an open attitude to change.
- Problem-solving skills, with the ability to think laterally and with a medium-term and long-term perspective.
- Ability to learn and quickly get familiar with a complex business and technology environment.
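To illustrate the Snowflake ingestion skills this listing names, here is a minimal sketch that runs the same COPY INTO statement a Snowpipe definition would wrap, using the official snowflake-connector-python package. It assumes an external S3 stage already exists; the account, credentials, and object names are placeholders.

```python
# Hedged sketch: load new files from an existing external stage into a raw
# Snowflake table. Account, role, and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="etl_user",
    password="<from-vault>",
    warehouse="INGEST_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Snowflake tracks already-loaded files per table, so reruns of this
    # COPY are idempotent - only new stage files are ingested.
    cur.execute("""
        COPY INTO RAW.TRADES
        FROM @S3_TRADES_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'SKIP_FILE'
    """)
    print(cur.fetchall())  # per-file load results for monitoring/logging
finally:
    conn.close()
```

Wrapping the same statement in a CREATE PIPE definition with auto-ingest turns this batch load into the continuous Snowpipe pattern mentioned above.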

Posted 6 days ago

Apply

9.0 - 12.0 years

14 - 24 Lacs

Gurugram

Remote

We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines, with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems.

Role & responsibilities
- Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena.
- Ingest structured and unstructured datasets, including POS, USDA, Circana, and internal sales data.
- Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg (see the upsert sketch after this listing).
- Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modelling.
- Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging.
- Implement S3 lifecycle policies, intelligent file partitioning, and audit logging.
- Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
- Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling.
- Design and manage a forecast feature registry with metrics versioning and traceability.
- Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption.

Preferred candidate profile
- 9-12 years of experience in data engineering.
- Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog.
- Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies.
- Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion.
- Experience in S3 metadata tagging and scalable data lake design patterns.
- Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows).
- Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation.
- Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, and demand volatility.
- Data observability best practices, including field-level logging, anomaly alerts, and classification tagging.
- Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries.
- Familiarity with Superset or Streamlit for QA visualization and UAT reporting.
- Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion.
- Independent, critical thinker with the ability to design for scale and evolving business logic.
- Strong communication and collaboration with BI, QA, and business stakeholders.
- High attention to detail in ensuring data accuracy, quality, and documentation.
- Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
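For the upsert-friendly pipelines this role centers on, the following sketch shows the Apache Hudi write pattern via its Spark datasource. The table, key, and S3 path names are assumed for illustration, and the cluster must have the Hudi Spark bundle on its classpath.

```python
# Illustrative sketch: upsert an incremental batch into a Hudi table.
# Record key + precombine field give idempotent, "newest wins" semantics.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert").getOrCreate()

incremental = spark.read.parquet("s3://lake/clean/pos_sales/2024-06-01/")

hudi_options = {
    "hoodie.table.name": "pos_sales",
    "hoodie.datasource.write.recordkey.field": "sale_id",
    "hoodie.datasource.write.precombine.field": "updated_at",  # newest wins
    "hoodie.datasource.write.partitionpath.field": "sale_date",
    "hoodie.datasource.write.operation": "upsert",
}

(
    incremental.write.format("hudi")
    .options(**hudi_options)
    .mode("append")              # Hudi upserts still use append mode
    .save("s3://lake/modeled/pos_sales/")
)
```

The resulting table can then be registered in the Glue Data Catalog and queried from Athena against the Modeled zone.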

Posted 6 days ago

Apply

2.0 - 5.0 years

0 - 0 Lacs

Kochi, Coimbatore

Work from Office

Role Summary:
We are looking for a Data Engineer who will be responsible for designing and developing scalable data pipelines, managing data staging layers, and integrating multiple data sources through APIs and SQL-based systems. You'll work closely with analytics and development teams to ensure high data quality and availability.

Key Responsibilities:
- Design, build, and maintain robust data pipelines and staging tables (a sketch of the API-to-staging flow follows this listing).
- Develop and optimize SQL queries for ETL processes and reporting.
- Integrate data from diverse APIs and external sources.
- Ensure data integrity, validation, and version control across systems.
- Collaborate with data analysts and software engineers to support analytics use cases.
- Automate data workflows and improve processing efficiency.
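Here is a small sketch of the API-to-staging-table flow this role describes, using the Python standard library plus requests. The endpoint URL, response shape, and table schema are hypothetical, and a production version would target your warehouse rather than SQLite.

```python
# Hedged sketch: pull records from a (hypothetical) REST API and upsert
# them into a staging table so reruns are idempotent.
import sqlite3

import requests

# Assumed response shape: {"orders": [{"id": ..., "amount": ..., "status": ...}]}
rows = requests.get(
    "https://api.example.com/v1/orders", timeout=30
).json()["orders"]

conn = sqlite3.connect("staging.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS stg_orders (
           order_id TEXT PRIMARY KEY,
           amount   REAL,
           status   TEXT
       )"""
)
# ON CONFLICT upsert requires SQLite 3.24+.
conn.executemany(
    """INSERT INTO stg_orders (order_id, amount, status)
       VALUES (:id, :amount, :status)
       ON CONFLICT(order_id) DO UPDATE
       SET amount = excluded.amount, status = excluded.status""",
    rows,
)
conn.commit()
conn.close()
```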

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

As an experienced professional with 4+ years in data engineering, you will be responsible for the following:
- Strong proficiency in writing complex SQL queries and stored procedures, and in performance tuning, to ensure efficient data retrieval and manipulation.
- Expertise in Azure Data Factory (ADF) for creating pipelines and data flows and for orchestrating data movement within the Azure environment.
- Proficiency in SQL Server Integration Services (SSIS) for ETL processes, package creation, and deployment to facilitate seamless data integration.
- Knowledge of Azure Synapse Analytics for data warehousing, distributed query execution, and integration with various Azure services.
- Familiarity with Jupyter Notebooks or Synapse Notebooks for data exploration and transformation.
- Understanding of Azure Blob Storage and Data Lake Storage, and their integration with data pipelines for efficient data storage and retrieval.
- Experience in Azure Analysis Services for building and managing semantic models to support business intelligence requirements.
- Knowledge of various data ingestion methods, including batch processing, real-time streaming, and incremental data loads, to ensure timely and accurate data processing (an incremental-load sketch follows this listing).

Additional skills that would be advantageous for this role include:
- Experience integrating Fabric with Power BI, Synapse, and other Azure services to enhance data visualization and analytics capabilities.
- Setting up CI/CD pipelines for ETL/ELT processes using tools like Azure DevOps or GitHub Actions to streamline data pipeline deployment.
- Familiarity with tools like Azure Event Hubs or Stream Analytics for large-scale data ingestion to support real-time data processing needs.

This position is based in Chennai, India. There is currently 1 open position available.
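As a sketch of the incremental (watermark-based) load pattern mentioned above, here it is in plain Python with pyodbc. The connection string, table names, and watermark column are placeholders; in ADF, the same pattern is usually expressed as a Lookup activity feeding a parameterized Copy activity.

```python
# Hedged sketch: watermark-driven incremental load into a staging table.
# All object names and the connection string are illustrative placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=dw;"
    "UID=etl_user;PWD=<secret>"
)
cur = conn.cursor()

# 1. Read the last high-water mark recorded for this source table.
cur.execute("SELECT last_loaded_at FROM etl.watermarks WHERE source = 'orders'")
watermark = cur.fetchone()[0]

# 2. Pull only rows changed since then.
cur.execute(
    "SELECT order_id, amount, modified_at FROM src.orders WHERE modified_at > ?",
    watermark,
)
changed = cur.fetchall()

# 3. Land them in staging, then advance the watermark; pyodbc runs this in
#    one transaction, committed below, so a failure leaves the mark untouched.
cur.executemany(
    "INSERT INTO stg.orders (order_id, amount, modified_at) VALUES (?, ?, ?)",
    [tuple(r) for r in changed],
)
cur.execute(
    "UPDATE etl.watermarks SET last_loaded_at = SYSUTCDATETIME() "
    "WHERE source = 'orders'"
)
conn.commit()
conn.close()
```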

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

navi mumbai, maharashtra

On-site

As the COE Solution Development Lead at Teradata, you will be a key thought leader responsible for overseeing the detailed design, development, and maintenance of complex data and analytic solutions. The role calls for strong technical and project management skills, as well as team building and mentoring capabilities, together with a deep understanding of Teradata's Solutions Strategy, Technology, Data Architecture, and the partner engagement model. Reporting directly to Teradata's Head of Solution COE, you will lead a team that develops scalable, efficient, and innovative data and analytics solutions to address complex business problems.

Your key responsibilities will include leading the end-to-end process of solution development, designing comprehensive solution architectures, ensuring flexibility for integrating various data sources and platforms, implementing best practices in data analytics solutions, collaborating with senior leadership, and mentoring a team of professionals to foster a culture of innovation and continuous learning. You will also deliver solutions on time and within budget, facilitate knowledge sharing across teams, and ensure that data solutions are scalable, secure, and aligned with the organization's overall technology roadmap. You will collaborate with the COE Solutions lead to transform conceptual solutions into detailed designs, and lead a team of data scientists, solution engineers, data engineers, and software engineers. You will also work closely with product development, legal, IT, and business teams to ensure seamless integration of data analytics solutions and the protection of related IP.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (an MS or MBA is preferred), and over 15 years of experience in IT, with at least 10 years in data and analytics solution development and 4+ years in a leadership or senior management position. Along with a proven track record of developing data-driven solutions, you should have experience working with cross-functional teams and a strong understanding of emerging trends in data analytics technologies.

We believe you will thrive at Teradata thanks to our people-first culture, flexible work model, focus on well-being, and commitment to Diversity, Equity, and Inclusion. If you are a collaborative, analytical, and innovative professional with excellent communication skills and a passion for data analytics, we invite you to join us in solving business challenges and driving enterprise analytics forward.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Join our fast-growing data team at the forefront of cloud data architecture and innovation. We are focused on building scalable, secure, and modern data platforms using cutting-edge Snowflake and other modern data stack technologies. If you are passionate about creating high-performance data infrastructure and solving complex data challenges in a cloud-native environment, this opportunity is for you.

As a Senior Data Engineer specializing in Snowflake and the modern data stack, you will architect and implement enterprise-grade, cloud-native data warehousing solutions. This is a hands-on engineering position with significant architectural influence: you will work extensively with dbt, Fivetran, and other modern data tools to create efficient, maintainable, and scalable data pipelines using ELT-first approaches (a minimal CI sketch for dbt follows this listing).

Your responsibilities will include demonstrating technical expertise across Snowflake mastery, dbt proficiency, data ingestion, SQL and data modeling, cloud platforms, orchestration, programming, and DevOps. You will also contribute to data management by applying data governance frameworks, data quality practices, and data visualization tools.

Preferred qualifications and certifications include a Bachelor's degree in Computer Science or a related field, substantial hands-on experience in data engineering with a focus on cloud data warehousing, and relevant certifications such as Snowflake SnowPro and dbt Analytics Engineering.

Day to day, you will design and implement robust data warehouse solutions, architect ELT pipelines, build automated data ingestion processes, maintain data transformation workflows, and develop data modeling best practices. You will optimize Snowflake warehouse performance, implement data quality tests and monitoring, build CI/CD pipelines, and collaborate with analytics teams to support self-service data access.

Valtech offers an international network of data professionals, continuous development opportunities, and a culture that values freedom and responsibility. We are committed to creating an equitable workplace that supports individuals from diverse backgrounds to thrive, grow, and achieve their goals. If you are ready to push the boundaries of innovation and creativity in a supportive environment, we encourage you to apply and join the Valtech team.
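One simple way to wire the dbt-on-Snowflake workflow named above into a CI/CD step is to run the project's transformations and tests and fail the pipeline on a non-zero exit code. The project directory and target names below are placeholders; this is a sketch of one approach, not a prescribed setup.

```python
# Hedged sketch: a CI step that runs dbt transformations, then dbt tests,
# and propagates any failure to the CI runner. Names are placeholders.
import subprocess
import sys

for args in (
    ["dbt", "run", "--project-dir", "analytics", "--target", "ci"],
    ["dbt", "test", "--project-dir", "analytics", "--target", "ci"],
):
    result = subprocess.run(args)
    if result.returncode != 0:
        sys.exit(result.returncode)  # surface the failure to the CI runner
```

Running tests immediately after the build keeps broken models from ever reaching the analytics schemas that BI tools read.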

Posted 1 week ago

Apply
