2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer, Data Engineering & Analytics
Location: India
Department: IT

About Company
Rapid7 is seeking a Data Engineer, Data Engineering & Analytics to join a high-performing data engineering and reporting team. This role is responsible for helping manage a robust Snowflake infrastructure, data modeling in a modern tech stack, and optimizing the company's Tableau reporting suite, ensuring that all business units have access to timely, accurate, and actionable data. This is a critical position that will help develop and maintain the data strategy, architecture, and analytics capabilities at Rapid7, driving insights that enable business growth. The ideal candidate will have experience in data engineering, analytics, and business intelligence, with equal parts business and technical acumen.

About Role
- Implement data modeling best practices to enhance data accessibility and reporting capabilities.
- Ensure data integrity, security, and compliance with industry standards and regulations.
- Document plans and results in user stories, issues, PRs, and the team's handbook - following the tradition of documentation first!
- Implement the Corp Data philosophy in everything you do.
- Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review.
- Collaborate with IT and DevOps teams to optimize cloud infrastructure and data governance policies.
- Manage and enhance the existing Tableau reporting suite, ensuring self-service analytics and actionable insights for stakeholders.
- Design, develop, and extend the DBT code repository to expand Enterprise Dimensional Warehouse capabilities and infrastructure.
- Develop and maintain a single source of truth for business metrics, ensuring consistency across reporting platforms (see the sketch after this posting).
- Approve data model changes as a Data Team Reviewer and code owner for specific database and data model schemas.
- Provide data modeling expertise to all Rapid7 teams through code reviews, pairing, and training to help deliver optimal, DRY, and scalable database designs and queries in Snowflake and in Tableau.
- Research and implement emerging trends in data analytics, visualization, and engineering, bringing innovative solutions to the organization.
- Align with data governance frameworks, policies, and best practices in collaboration with existing teams.
- Identify and lead opportunities for new data initiatives, ensuring Rapid7 remains data-driven and insights-powered.

What You Bring to the Role
- Ability to thrive in a fast-paced hybrid organization.
- Comfort working in a highly agile, intensely iterative environment.
- Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations.
- 2+ years of experience in data engineering, analytics, or business intelligence.
- 2+ years of experience designing, implementing, operating, and extending enterprise dimensional data models.
- 2+ years of experience building reports and dashboards in Tableau and/or similar data visualization tools.
- Experience in DBT modeling and an understanding of modular, performant models.
- Solid understanding of Snowflake, SQL, and data warehouse management.
- Understanding of ETL/ELT processes, data pipelines, and cloud-based data architectures.
- Familiarity with modern data stacks (DBT, Airflow, Fivetran, Matillion, or similar tools).
- Ability to manage data governance, security, and compliance requirements (SOC 2, GDPR, etc.).
- A passion for continuous learning, innovation, and leveraging data for business impact.
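Illustration (not part of the posting): a minimal sketch of reading a governed "single source of truth" metric from Snowflake in Python, assuming the snowflake-connector-python package; the account, credentials, and marts.fct_arr table are placeholder assumptions, not Rapid7's actual schema.

```python
# Minimal sketch: reading a governed metrics model from Snowflake so every
# consumer sees the same numbers. Credentials and table name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    # One governed model feeds every downstream dashboard, keeping metrics consistent.
    cur.execute("SELECT month, SUM(arr) FROM marts.fct_arr GROUP BY month")
    for month, arr in cur.fetchall():
        print(month, arr)
finally:
    conn.close()
```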
Posted 3 weeks ago
3.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Company Description
Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We're an ambitious team of professionals spread across four continents, looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity.

Job Description
We are looking for a Python Backend Engineer with exposure to AI engineering to join our team in building a scalable, cognitive data platform. This platform will crawl and process unstructured data sources, enabling intelligent data extraction and analysis. The ideal candidate will have deep expertise in backend development using FastAPI, RESTful APIs, SQL, and Azure data technologies, with a secondary focus on integrating AI/ML capabilities into the product.

Core Responsibilities
- Design and develop high-performance backend services using Python (FastAPI).
- Develop RESTful APIs to support data ingestion, transformation, and AI-based feature access.
- Work closely with DevOps and data engineering teams to integrate backend services with Azure data pipelines and databases.
- Manage database schemas, write complex SQL queries, and support ETL processes using Python-based tools.
- Build secure, scalable, and production-ready services following best practices in logging, authentication, and observability.
- Implement background tasks and async event-driven workflows for data crawling and processing (see the sketch after this posting).

AI Engineering Contributions
- Support integration of AI models (NLP, summarization, information retrieval) within backend APIs.
- Collaborate with the AI team to deploy lightweight inference pipelines using PyTorch, TensorFlow, or ONNX.
- Participate in training data pipeline design and minor model fine-tuning as needed for business logic.
- Contribute to the testing, logging, and monitoring of AI agent behavior in production environments.

Qualifications
- 3+ years of experience in Python backend development, with strong experience in FastAPI or equivalent frameworks.
- Solid understanding of RESTful API design, asynchronous programming, and web application architecture.
- Proficiency with relational databases (e.g., PostgreSQL, MS SQL Server) and Azure cloud services.
- Experience with ETL workflows, job scheduling, and data pipeline orchestration (Airflow, Prefect, etc.).
- Exposure to machine learning libraries (e.g., Scikit-learn, Transformers, OpenAI APIs) is a plus.
- Familiarity with containerization (Docker), CI/CD practices, and performance tuning.
- A mindset of code quality, scalability, documentation, and collaboration.

Additional Information - Benefits
At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include:
- Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home.
- Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements.
- Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion.
- Events: Regular team and organisation-wide get-togethers and events.
- Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together.
- Future development: At Quantanite, you'll have a personal development plan to help you improve in the areas you're looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting to the next level. You'll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you'll have the chance to take on more responsibility over time.

So, if you're looking for a career full of purpose and potential, we'd love to hear from you!
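Illustration (not part of the posting): a minimal sketch of the async background-task pattern the responsibilities above describe, using FastAPI; the /crawl endpoint and crawl_source() helper are hypothetical, not Quantanite's actual API.

```python
# Minimal sketch: async FastAPI service scheduling a background crawl task.
# The /crawl endpoint and crawl_source() helper are hypothetical.
import asyncio

from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()

class CrawlRequest(BaseModel):
    url: str

async def crawl_source(url: str) -> None:
    """Stand-in for fetching and processing one unstructured source."""
    await asyncio.sleep(1)  # simulate I/O-bound crawl work

@app.post("/crawl")
async def start_crawl(req: CrawlRequest, background_tasks: BackgroundTasks) -> dict:
    # The task runs after the response is sent, keeping the endpoint fast.
    background_tasks.add_task(crawl_source, req.url)
    return {"status": "scheduled", "url": req.url}
```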
Posted 3 weeks ago
6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
About the Company
Blue Star Limited is India's leading air conditioning and commercial refrigeration company, with over eight decades of experience in providing expert cooling solutions. It fulfils the cooling requirements of a large number of corporate, commercial, and residential customers, and also offers products such as water purifiers, air purifiers, and air coolers. It also provides expertise in allied contracting activities such as electrical, plumbing, and fire-fighting services in order to provide turnkey solutions, apart from the execution of specialised industrial projects.

About the Role
The role involves technical expertise in cooling technologies, product development, design of dehumidification systems, troubleshooting, industry trends, training, consultation, and project management.

Responsibilities
- Technical Expertise: In-depth knowledge of cooling technologies, including air conditioning units, refrigeration systems, heat pumps, and various types of heat exchangers. Understand and apply principles of thermodynamics, fluid mechanics, and heat transfer to cooling systems and heat exchanger designs. In-depth knowledge of airflow control systems and defining new algorithms. Understanding of different dehumidification processes and technologies. System integration of mechanical, electrical, electronic, and refrigerant control components. Component selection based on specification requirements.
- Product Development: Participate in the design and development of new cooling systems, dehumidification technologies, and heat exchangers. Conduct research on emerging technologies and industry trends to incorporate innovative solutions. Collaborate with engineering and design teams to create efficient and cost-effective products.
- Testing and Evaluation: Develop and implement testing protocols for cooling systems and heat exchangers. Analyse performance metrics such as efficiency, capacity, reliability, and environmental impact. Identify areas for improvement and recommend design modifications based on test results.
- Troubleshooting and Problem Solving: Provide technical support to resolve complex issues related to cooling systems and heat exchangers. Diagnose problems, recommend solutions, and oversee the implementation of corrective actions.
- Industry Trends and Innovation: Stay updated on the latest advancements in cooling technology and heat exchanger design. Participate in industry conferences, seminars, and forums to exchange knowledge and gain insights. Evaluate and implement new technologies and best practices to enhance product offerings.
- Training and Education: Develop training materials and conduct workshops for engineers, technicians, and other professionals. Provide mentorship and guidance to junior team members to ensure knowledge transfer and skill development.
- Consultation and Advisory Role: Act as a consultant for projects involving cooling technology and heat exchangers. Offer expertise in system design, energy efficiency optimisation, sustainability practices, and cost-effectiveness. Collaborate with standards bodies to provide recommendations.
- Project Management: Manage projects related to cooling systems and heat exchangers, ensuring adherence to timelines, budgets, and resource allocation. Coordinate with cross-functional teams to achieve project objectives.

Qualifications
M.Tech / PhD in Mechanical Engineering or a similar field, with 6+ years of experience in air conditioning product development.
Posted 3 weeks ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skills desired:
- Strong SQL (multi-pyramid SQL joins)
- Python (FastAPI or Flask framework)
- PySpark
- Commitment to work in overlapping hours
- GCP knowledge (BigQuery, Dataproc, and Dataflow)
- Amex experience preferred (not mandatory)
- Power BI preferred (not mandatory)
Keywords: Flask, PySpark, Python, SQL
Posted 3 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, including constructing data models and ETL processes (see the sketch after this posting).
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce & Netsuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data

Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
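Illustration (not part of the posting): a minimal sketch of an Airflow 2.x DAG for a daily ELT run, as referenced above; the dag_id, task names, and dbt command are illustrative assumptions, not this team's actual pipeline.

```python
# Minimal sketch: a daily ELT DAG in Airflow 2.x. Task names and the
# `dbt run` command are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow <2.4 uses schedule_interval instead
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extract step'")
    transform = BashOperator(task_id="transform", bash_command="dbt run")
    extract >> transform           # run the dbt transform after extraction
```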
Posted 3 weeks ago
10.0 - 14.0 years
8 - 15 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, including constructing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Job Title: Senior Software Engineer - Full Stack
Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Timings: 11 AM to 8 PM IST
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Engineer - Global Data Analytics, Technology (Maersk)
This position will be based in India - Bangalore/Pune.

A.P. Moller - Maersk
A.P. Moller - Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers' supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners.

The Brief
In this role as an Associate AI/ML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What I'll be doing - your accountabilities:
- Design, develop, and implement robust, scalable, and optimized machine learning and deep learning models, with the ability to iterate with speed.
- Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards.
- Implement best practices in security, pipeline automation, and error handling using programming and data manipulation tools.
- Identify and implement the right data-driven approaches to solve ambiguous and open-ended business problems, leveraging data engineering capabilities.
- Research and implement new models, technologies, and methodologies and integrate them into production systems, ensuring scalability and reliability.
- Apply creative problem-solving techniques to design innovative tools, develop algorithms, and optimize workflows.
- Independently manage and optimize data solutions, perform A/B testing, and evaluate system performance.
- Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects.
- Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration.
- Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational Skills
- Has mastered the concepts of Programming and can demonstrate them in complex scenarios.
- Understands the following skills beyond the fundamentals and can demonstrate them in most situations without guidance: AI & Machine Learning, Data Analysis, Machine Learning Pipelines, Model Deployment.

Specialized Skills
- Understands the following skills beyond the fundamentals and can demonstrate them in most situations without guidance: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, Data Processing Frameworks.
- Understands the basic fundamentals of Technical Documentation and can demonstrate them in common scenarios with some guidance.

Qualifications & Requirements
- BSc/MSc/PhD in computer science, data science, or a related discipline, with 5+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience.
- Good problem-solving skills, in both technical and non-technical domains.
- Good broad understanding of ML and statistics, covering standard ML for regression and classification, forecasting and time-series modeling, and deep learning.
- 4+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g., scikit-learn, PyTorch); a minimal example of such a pipeline follows this posting.
- Hands-on experience building end-to-end data products based on AI/ML technologies.
- Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
- Strong foundation in neural networks, optimization techniques, and model evaluation.
- Experience with LLMs and Transformer architectures (BERT, GPT, LLaMA, Mistral, Claude, Gemini, etc.).
- Proficiency in Python, LangChain, Hugging Face Transformers, and MLOps.
- Experience with Reinforcement Learning and multi-agent systems for decision-making in dynamic environments.
- Knowledge of multimodal AI (integrating text, images, and other data modalities into unified models).
- Team player, eager to collaborate.

Preferred Experiences
In addition to the basic qualifications, it would be great if you have:
- Hands-on experience with common OR solvers such as Gurobi.
- Experience with a common dashboarding technology (we use Power BI) or a web-based frontend such as Dash, Streamlit, etc.
- Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/...).
- Experience with Spark and distributed computing.
- Strong hands-on experience with MLOps solutions, including open-source solutions.
- Experience with cloud-based orchestration technologies, e.g. Airflow, KubeFlow, etc.
- Experience with containerization (Kubernetes & Docker).

As a performance-oriented company, we strive to always recruit the best person for the job - regardless of gender, age, nationality, sexual orientation, or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
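Illustration (not part of the posting): a minimal sketch of the kind of reproducible scikit-learn pipeline the qualifications above describe; the synthetic dataset and model choice are assumptions.

```python
# Minimal sketch: a reproducible scikit-learn pipeline with cross-validated
# evaluation. The synthetic dataset stands in for real features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),             # preprocessing stays inside the pipeline
    ("model", GradientBoostingClassifier()),
])

# Cross-validation gives a reproducible, automatable evaluation step.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping preprocessing inside the pipeline prevents train/test leakage and makes the whole model artifact testable as one unit, which supports the reproducibility requirement above.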
Posted 3 weeks ago
3.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About the Role:
We are looking for a highly skilled AI/ML Developer to join the core product team of QAPilot.io. The ideal candidate should come from a product-based or AI-first company, with a strong academic background from institutes like IITs, NITs, IIITs, or other Tier-1 engineering colleges. You will work on real-world AI problems related to test automation, software quality, and predictive engineering.

Key Responsibilities:
- Design, build, and deploy machine learning models for intelligent QA automation
- Work on algorithms for test case optimization, bug prediction, pattern recognition, and data-driven QA insights
- Apply techniques from supervised/unsupervised learning, NLP, and deep learning
- Integrate ML models into the product using scalable and production-ready code
- Continuously improve model performance through experimentation and feedback loops
- Collaborate with full-stack developers, product managers, and QA experts
- Explore LLMs, transformers, and generative AI for advanced test data generation and analysis

Required Skills & Qualifications:
- B.Tech / M.Tech / MS in Computer Science, Data Science, or related fields from IIT/NIT/IIIT or other top-tier institutes
- 3+ years of experience as an AI/ML Developer, preferably in product or AI-centric companies
- Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch)
- Experience in NLP, LLMs, or generative AI (preferred)
- Hands-on experience with the ML lifecycle: data wrangling, model training, evaluation, and deployment
- Familiarity with MLOps tools like MLflow, Docker, Airflow, or cloud platforms (AWS/GCP)
- Prior exposure to software testing, DevOps, or developer tooling is a plus
- Strong analytical skills, attention to detail, and curiosity to solve open-ended problems
- Portfolio, GitHub, or project links demonstrating AI/ML expertise are desirable

Why Join QAPilot.io:
- Work on an innovative AI product transforming the software QA ecosystem
- Join a high-impact, product-oriented engineering culture
- Solve challenging AI problems with real user value
- Collaborate with top talent from the tech and AI ecosystem
- Competitive salary, learning-focused environment, and growth opportunities

To Apply:
Please send your updated resume and any supporting links (GitHub, projects, publications).
Posted 3 weeks ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking to fill this opportunity with one of our leading financial-domain clients.

Position: Big Data Developer (Apache Spark)
Location: Pune (Hybrid)
Experience: 6 - 9 years

Job Description:
- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark (see the sketch after this posting).
- Database modelling and working with SQL or NoSQL databases is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition.
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control like Git and build tools like Maven is recommended.
- Software development experience is good to have, along with data engineering experience.
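Illustration (not part of the posting): a minimal Apache Spark sketch of the read-transform-write pattern this role centres on, written in PySpark for brevity (the role itself emphasizes Java/Scala); the paths and column names are assumptions.

```python
# Minimal sketch: read raw trade events, aggregate daily, write Parquet.
# Input/output paths and column names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_trade_agg").getOrCreate()

trades = spark.read.parquet("/data/raw/trades")  # hypothetical landing path

daily = (
    trades
    .withColumn("trade_date", F.to_date("trade_ts"))
    .groupBy("trade_date", "symbol")
    .agg(
        F.sum("quantity").alias("total_qty"),
        F.avg("price").alias("avg_price"),
    )
)

# Partitioning by date keeps downstream reads cheap.
daily.write.mode("overwrite").partitionBy("trade_date").parquet("/data/curated/daily_trades")
```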
Posted 3 weeks ago
13.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Lead AI/ML Engineer - Global Data Analytics, Technology (Maersk)
This position will be based in India - Bangalore/Pune.

A.P. Moller - Maersk
A.P. Moller - Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers' supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners.

The Brief
In this role as a Lead AI/ML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings.

Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions.

What I'll be doing - your accountabilities:
- Lead end-to-end AI/ML projects, from problem definition, feature selection, development, and implementation of models through monitoring, retraining, infrastructure, and communication of results.
- Provide technical leadership on complex AI/ML projects, developing end-to-end machine learning pipelines and robust data models and driving innovation in engineering practices.
- Address advanced AI/ML challenges; evaluate and optimize existing data pipelines and frameworks for efficiency and cost-effectiveness using cutting-edge techniques.
- Architect and oversee scalable, production-ready data models and pipelines, solve complex issues, and lead work on optimization and performance of models, ensuring alignment with business needs.
- Collaborate with stakeholders and cross-functional teams and communicate insights to influence data strategy, product roadmaps, and scalable solutions through expertise in AI/ML techniques, tools, architectures, and business applications, delivering measurable positive impact.
- Design and advocate for resilient, secure, scalable, and sustainable data and AI/ML architectures while creating modernization plans for long-term innovation and maintainability.
- Evaluate and improve tools and methodologies, and assess industry practices to drive quality and innovation across AI/ML engineering initiatives.
- Mentor AI/ML engineers and other talent, promoting diversity, inclusion, and leadership development across all levels.
- Build relationships with stakeholders and lead initiatives to deliver robust, scalable, future-proof data engineering solutions, while championing quality, modernization, and best practices across the organization.
- Work across organizational boundaries to resolve challenges and influence shared roadmaps spanning multiple teams, ensuring scalable solutions that prioritize organizational objectives over team- or individual-specific ones, while aligning with evolving data engineering requirements.

Foundational Skills
- Specialized in Machine Learning Pipelines: can demonstrate them in complex scenarios and mentors/coaches others.
- Has mastered the concepts of, and can demonstrate in complex scenarios: Programming, AI & Machine Learning, Data Analysis, Model Deployment.

Specialized Skills
- Understands the following skills beyond the fundamentals and can demonstrate them in most situations without guidance: Deep Learning, Statistical Analysis, Data Engineering, Big Data Technologies, Natural Language Processing (NLP), Data Architecture, Data Processing Frameworks, Technical Documentation.
- Technical leadership experience in data integration and agentic AI solutions, including: connecting AI agents to various custom data sources (e.g., databases, APIs, internal document stores); implementing Retrieval Augmented Generation (RAG) patterns (see the sketch after this posting); working with vector stores (e.g., Pinecone, Weaviate, ChromaDB, FAISS) and knowledge graphs; implementing agent memory storage and reasoning solutions; and using various multi-agent frameworks (e.g., AutoGen, CrewAI, or similar).

Qualifications & Requirements
- BSc/MSc/PhD in computer science, data science, or a related discipline, with 13+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and design experience.
- 6+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g., scikit-learn, PyTorch).
- Strong understanding of, and implementation experience with, AI agent solutions.
- Hands-on experience building end-to-end data products based on recommendation technologies.
- Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD.
- Communication and leadership experience, with experience initiating, driving, and delivering projects.
- Team player, eager to collaborate.

Preferred Experiences
In addition to the basic qualifications, it would be great if you have:
- Experience as a tech lead or (still hands-on) engineering manager.
- Experience with a common dashboarding technology (we use Power BI for now) or a web-based frontend such as Dash, Streamlit, etc.
- Experience working in cross-functional product engineering teams following agile development methodologies (Scrum/Kanban/...).
- Experience with Spark and distributed computing.
- Strong hands-on experience with MLOps solutions, including open-source solutions.
- Experience with cloud-based orchestration technologies, e.g. Airflow, KubeFlow, etc.
- Experience with containerization (Kubernetes & Docker).
- Experience with front-end frameworks such as React or Angular.
- Knowledge of data visualization using D3.js or Chart.js.

As a performance-oriented company, we strive to always recruit the best person for the job - regardless of gender, age, nationality, sexual orientation, or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
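Illustration (not part of the posting): a minimal sketch of the RAG retrieval step referenced above, using FAISS and sentence-transformers; the documents, model name, and single-result retrieval are assumptions, not Maersk's actual stack.

```python
# Minimal sketch: embed documents, index them, retrieve context for a query.
# The documents and model name are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Container dwell time is measured from discharge to gate-out.",
    "Port call optimization reduces vessel idle time at berth.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine on normalized vectors
index.add(embeddings)

query = model.encode(["How is dwell time measured?"], normalize_embeddings=True)
scores, ids = index.search(query, 1)

# The retrieved passage would be prepended to the LLM prompt as grounding context.
print(scores[0][0], docs[ids[0][0]])
```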
Posted 3 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Senior Data Engineer
Employment Type: Full-Time
Location: Ahmedabad, Onsite
Experience Required: 5+ Years

About Techiebutler
Techiebutler is looking for an experienced Data Engineer to develop and maintain scalable, secure data solutions. You will collaborate closely with data science, business analytics, and product development teams, deploying cutting-edge technologies and leveraging best-in-class third-party tools. You will also ensure compliance with security, privacy, and regulatory standards while aligning data solutions with industry best practices.

Tech Stack
- Languages: SQL, Python
- Pipeline Orchestration: Dagster (legacy: Airflow)
- Data Stores: Snowflake, ClickHouse
- Platforms & Services: Docker, Kubernetes
- PaaS: AWS (ECS/EKS, DMS, Kinesis, Glue, Athena, S3)
- ETL: Fivetran, DBT
- IaC: Terraform (with Terragrunt)

Key Responsibilities
- Design, develop, and maintain robust ETL pipelines using SQL and Python.
- Orchestrate data pipelines using Dagster or Airflow (see the sketch after this posting).
- Collaborate with cross-functional teams to meet data requirements and enable self-service analytics.
- Ensure seamless data flow using stream, batch, and Change Data Capture (CDC) processes.
- Use DBT for data transformation and modeling to support business needs.
- Monitor, troubleshoot, and improve data quality and consistency.
- Ensure all data solutions adhere to security, privacy, and compliance standards.

Essential Experience
- 5+ years of experience as a Data Engineer.
- Strong proficiency in SQL.
- Hands-on experience with modern cloud data warehousing solutions (Snowflake, BigQuery, Redshift).
- Expertise in ETL/ELT processes and batch and streaming data processing.
- Proven ability to troubleshoot data issues and propose effective solutions.
- Knowledge of AWS services (S3, DMS, Glue, Athena).
- Familiarity with DBT for data transformation and modeling.

Desired Experience
- Experience with additional AWS services (EC2, ECS, EKS, VPC, IAM).
- Knowledge of Infrastructure as Code (IaC) tools like Terraform and Terragrunt.
- Proficiency in Python for data engineering tasks.
- Experience with orchestration tools like Dagster, Airflow, or AWS Step Functions.
- Familiarity with pub-sub, queuing, and streaming frameworks (AWS Kinesis, Kafka, SQS, SNS).
- Experience with CI/CD pipelines and automation for data processes.

Why Join Us?
- Opportunity to work on cutting-edge technologies and innovative data solutions.
- Be part of a collaborative team focused on delivering high-impact results.
- Competitive salary and growth opportunities.

If you're passionate about data engineering and want to take your career to the next level, apply now! We look forward to reviewing your application and potentially welcoming you to our team!
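Illustration (not part of the posting): a minimal Dagster sketch of the asset-based orchestration listed in the tech stack above; the asset names and toy transformation are assumptions.

```python
# Minimal sketch: two Dagster assets, the second derived from the first.
# Asset names and the toy data are assumptions.
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_orders() -> pd.DataFrame:
    # Stand-in for an extract step (e.g., a Fivetran-landed table).
    return pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 20.0]})

@asset
def total_revenue(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Dagster wires the dependency from the parameter name.
    return pd.DataFrame({"total": [raw_orders["amount"].sum()]})

defs = Definitions(assets=[raw_orders, total_revenue])
```

Unlike task-centric Airflow DAGs, Dagster models pipelines as data assets with declared dependencies, which is one reason teams run both during a migration.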
Posted 3 weeks ago
8.0 - 13.0 years
25 - 40 Lacs
Hyderabad
Hybrid
Job Title: Tech Lead - GCP Data Engineer
Location: Hyderabad, India
Experience: 5+ Years
Job Type: Full-Time
Industry: IT / Software Services
Functional Area: Data Engineering / Cloud / Analytics
Role Category: Cloud Data Engineering

Position Overview
We are seeking a GCP Data Engineer with strong expertise in SQL, Python, and Google Cloud Platform (GCP) services, including BigQuery, Cloud Composer, and Airflow. The ideal candidate will play a key role in building scalable, high-performance data solutions to support marketing analytics initiatives. This role involves collaboration with cross-functional global teams and provides an opportunity to work on cutting-edge technologies in a dynamic marketing data landscape.

Key Responsibilities
- Lead technical teams and coordinate with global stakeholders.
- Manage and estimate data development tasks and delivery timelines.
- Build and optimize data pipelines using GCP, especially BigQuery, Cloud Storage, and Cloud Composer (see the sketch after this posting).
- Work with Airflow DAGs, REST APIs, and data orchestration workflows.
- Collaborate on development and debugging of ETL pipelines, including IICS and Ascend.io (preferred).
- Perform complex data analysis across multiple sources to support business goals.
- Implement CI/CD pipelines and manage version control using Git.
- Troubleshoot and upgrade existing data systems and ETL chains.
- Contribute to data quality, performance optimization, and cloud-native solution design.

Required Skills & Qualifications
- Bachelor's or Master's in Computer Science, IT, or a related field.
- 5+ years of experience in Data Engineering or relevant roles.
- Strong expertise in GCP, BigQuery, Cloud Composer, and Airflow.
- Proficient in SQL, Python, and REST API development.
- Hands-on experience with IICS, MySQL, and data warehousing solutions.
- Knowledge of ETL tools like Ascend.io is a plus.
- Exposure to marketing analytics tools (e.g., Google Analytics, BlueConic, Klaviyo) is desirable.
- Familiarity with performance marketing concepts (segmentation, A/B testing, attribution modeling, etc.).
- Excellent communication and analytical skills.
- GCP certification is a strong plus.
- Experience working in Agile environments.

To apply, send your resume to: krishnanjali.m@technogenindia.com
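Illustration (not part of the posting): a minimal sketch of a parameterized BigQuery query from Python, as the responsibilities above describe; the project, dataset, and table names are placeholder assumptions.

```python
# Minimal sketch: a parameterized BigQuery query via the Python client.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT campaign, SUM(clicks) AS clicks
    FROM `my_project.marketing.events`
    WHERE event_date >= @start
    GROUP BY campaign
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01")]
)

# Named parameters (@start) avoid SQL injection and make the query reusable.
for row in client.query(sql, job_config=job_config).result():
    print(row.campaign, row.clicks)
```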
Posted 3 weeks ago
6.0 - 11.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, including constructing data models and ETL processes.
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Demonstrate at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Exhibit a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Have familiarity with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce & Netsuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Greetings from TechnoGen!!!

Thank you for taking the time to tell us about your competencies and skills, and for allowing us an opportunity to tell you about TechnoGen; we understand that your experience and expertise are relevant to a current opening with our clients.

About TechnoGen:
LinkedIn: https://www.linkedin.com/company/technogeninc/about/
TechnoGen, Inc. is an ISO 9001:2015, ISO 20000-1:2011, ISO 27001:2013, and CMMI Level 3 Global IT Services Company headquartered in Chantilly, Virginia. TechnoGen, Inc. (TGI) is a Minority & Women-Owned Small Business with over 20 years of experience providing end-to-end IT Services and Solutions to the Public and Private sectors. TGI provides highly skilled and certified professionals and has successfully executed more than 345 projects. TechnoGen is committed to helping our clients solve complex problems and achieve their goals, on time and under budget.

Please share the below details for further processing of your profile:
- Total years of experience:
- Relevant years of experience:
- CTC (including variable):
- ECTC:
- Notice period:
- Reason for change:
- Current location:

Job Title: GCP Data Engineer
Required Experience: 5+ years
Work Mode: WFO - 4 days from office
Shift Time: UK shift, 12:00 PM IST to 09:00 PM IST
Location: Hyderabad

Job Summary:
As a GCP Data Engineer, we need someone with strong experience in SQL and Python. The ideal candidate should have hands-on expertise in Google Cloud Platform (GCP) services, especially BigQuery, Cloud Composer, and the Airflow framework, and a solid understanding of data engineering best practices. You will work closely with our internal teams and technology partners to deliver comprehensive and scalable marketing data and analytics solutions. This role offers the unique opportunity to engage with many technology platforms in a rapidly evolving marketing technology landscape.

Key Responsibilities:
- Technical oversight and team management of the developers, coordination with US-based Mattel resources, and estimation of work.
- Strong knowledge of cloud computing platforms - Google Cloud.
- Expertise in MySQL & SQL/PL.
- Good experience in IICS.
- Experience in ETL; Ascend.io is an added advantage.
- GCP & BigQuery knowledge is a must; GCP certification is an added advantage.
- Good experience with Google Cloud Storage (GCS), Cloud Composer, DAGs, and Airflow.
- REST API development experience.
- Good analytical and problem-solving skills; efficient communication.
- Experience in designing, implementing, and managing various ETL job execution flows.
- Utilize Git for source version control.
- Set up and maintain CI/CD pipelines.
- Troubleshoot, debug, and upgrade existing applications & ETL job chains.
- Comprehensive data analysis across complex data sets.
- Ability to collaborate effectively across technical development teams and business departments.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering or related roles.
- Strong understanding of Google Cloud Platform and associated tools.
- Proven experience delivering consumer marketing data and analytics solutions for enterprise clients.
- Strong knowledge of data management, ETL processes, data warehousing, and analytics platforms.
- Experience with SQL and NoSQL databases.
- Proficiency in Python.
- Hands-on experience with data warehousing solutions.
- Knowledge of marketing analytics tools and technologies, including but not limited to Google Analytics, BlueConic, Klaviyo, etc.
- Knowledge of performance marketing concepts such as targeting & segmentation, real-time optimization, A/B testing, attribution modeling, etc.
- Excellent communication skills with a track record of collaboration across multiple teams.
- Strong collaboration skills and a team-oriented mindset.
- Strong problem-solving skills, adaptability, and the ability to thrive in a dynamic and rapidly changing environment.
- Experience working in Agile development environments.

Best Regards,
Syam.M | Sr. IT Recruiter
syambabu.m@technogenindia.com
www.technogenindia.com | Follow us on LinkedIn
Posted 3 weeks ago
3.0 - 5.0 years
5 - 12 Lacs
Hyderabad, Chennai
Work from Office
Greetings!!!

Hiring GCP Data Engineers for Chennai/Hyderabad locations.
Skills: GCP, PySpark, Python, Airflow, SQL
Location: Chennai/Hyderabad (WFO)
Experience: 3 to 5 years
Interested candidates can share their resumes at anmol.bhatia@incedoinc.com
Posted 3 weeks ago
7.0 years
8 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 4 Openings | Trivandrum

Role description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected:
- Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
- Configure: Define and govern the configuration management plan. Ensure compliance from the team.
- Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
- Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
- Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
- Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
- Estimate: Create and provide input for effort and size estimation and plan resources for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
- Interface with Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
- Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
- Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF and ADLF.
- Proficient in SQL for analytics, including windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Additional Comments:
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role; a minimal star-schema example follows this posting.

Must-Have Skills
- 8+ years of hands-on experience in data engineering or big data development.
- Strong proficiency in PySpark and SQL for data transformation and pipeline development.
- Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
- Practical knowledge of cloud data environments - Azure, AWS, or GCP.
- Solid understanding of data warehousing concepts, including the Kimball methodology and star/snowflake schema design.
Proven experience designing and maintaining ETL/ELT pipelines in production. Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

Skills: Azure Data Factory, Azure Databricks, PySpark, SQL

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
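For illustration only (not part of the posting), here is a minimal PySpark sketch of the kind of work described above: joining raw source extracts into a star-schema-style fact table. All paths, table names, and columns are hypothetical.

```python
# Illustrative sketch: ingest, wrangle, join, and write a fact table in PySpark.
# Paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_fact_build").getOrCreate()

# Ingest raw source extracts (placeholder paths).
orders = spark.read.parquet("s3://raw/orders/")
customers = spark.read.parquet("s3://raw/customers/")

# Wrangle: deduplicate and standardise types before joining.
orders_clean = (
    orders.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Join to the customer dimension and derive fact-level keys.
fact_orders = (
    orders_clean.join(customers.select("customer_id", "customer_sk"), "customer_id")
                .withColumn("order_date_key",
                            F.date_format("order_ts", "yyyyMMdd").cast("int"))
                .select("order_id", "customer_sk", "order_date_key", "order_amount")
)

# Partitioning by the date key keeps downstream scans (and processing cost) down.
fact_orders.write.mode("overwrite").partitionBy("order_date_key") \
    .parquet("s3://warehouse/fact_orders/")
```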
Posted 3 weeks ago
0 years
0 Lacs
Cochin
On-site
A Duct Fabricator is responsible for creating, assembling, and sometimes installing sheet metal ductwork used in HVAC (Heating, Ventilation, and Air Conditioning) systems. A Duct Fitter, also known as a Sheet Metal Duct Installer, is responsible for fabricating, installing, and maintaining ductwork systems for HVAC. They work with sheet metal to create and assemble ducts according to blueprints and specifications, ensuring proper airflow and energy efficiency within buildings.

Job Type: Permanent
Pay: ₹8,086.00 - ₹41,407.41 per month
Benefits: Health insurance, paid sick time, Provident Fund
Work Location: In person
Posted 3 weeks ago
5.0 years
4 - 8 Lacs
Hyderābād
On-site
We're looking for a Senior Data Engineer. This role is office based, Hyderabad office. The Senior Data Engineer is responsible for creating effective technological solutions and for managing a team of specialists, including quality control of the work performed.

In this role you will:
Work closely with the Product team to gather requirements and convert them into technical designs.
Lead the team in the design of technology solutions that meet business needs in terms of sustainability, scalability, performance, and security.
Take overall responsibility for the technical development of Data Engineering.
Collaborate with various delivery teams on the low-level design of data-oriented, ELT, or ETL projects in response to product requirements.
Be responsible for implementing, disseminating, and adhering to CSOD's Data Engineering methodologies, processes, and principles.

You’ve got what it takes if you have:
5+ years of experience
Demonstrable experience in delivering complex technology solutions
Proven ability to quickly adapt to new technologies, concepts, and approaches – Essential
Demonstrable experience in ETL/ELT processes – Essential
Proven GCP, AWS, Confluent, and Elastic Cloud experience – Essential
An understanding of cloud technologies and their application and benefits, e.g. Google, AWS – Essential
Well versed in orchestration tools like Airflow – Essential
Experience with the programs involved in data processing and transformation – Essential
Proven expertise in Python – Essential
Enterprise-level business-to-consumer databases (MySQL, Influx, Postgres, NoSQL, and so on) – Essential in MySQL, desirable in others
Highly professional individual with excellent written and verbal communication skills – Essential
Enterprise-level BI or visualization programs (Looker, QuickSight, Tableau, Qlik, Power BI, etc.) – Desirable
Automation and AI experience on the data side – Desirable
Good understanding of Agile, estimation, sprint planning, and so on.

#LI-Onsite

Our Culture: Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today.

Who We Are: Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
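For illustration of the orchestration skills named above, a minimal Airflow DAG sketch. The DAG id, schedule, and callables are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Illustrative sketch: a two-task ELT DAG. Names and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling rows from the source system")


def load():
    print("loading transformed rows into the warehouse")


with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_load  # run load only after extract succeeds
```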
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera
Sponsorship Available: No
Relocation Assistance Available: No

STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer

Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists, information technology specialists, and our global technology and operations teams to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects continues to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course - Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We’d like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams.

What You Will Do
We think you’ll be excited about having opportunities to:
Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products.
Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources.
Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP.
Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB).
Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment.
Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams.
Implement DevOps and DataOps practices including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes).
Learn about the tire industry and tire manufacturing processes from subject matter experts.
Be a part of cross-functional teams working together to deliver impactful results.

What We Expect
Bachelor’s degree in computer science or a similar technical field; preferred: Master’s degree in computer science or a similar field.
5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows.
Experience with the following technology groups:
Strong experience in Python, Java, or other languages for data engineering and scripting.
Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design.
Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP. Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt). Understanding of RESTful APIs and integration of data services with applications. Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks. Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization. Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
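As a purely illustrative sketch of the "data services and APIs" responsibility above, here is a tiny FastAPI service that returns rows from a hypothetical PostgreSQL table; the endpoint, table, and DATABASE_URL environment variable are all invented for the example.

```python
# Illustrative sketch: a minimal REST data service over a curated dataset.
# The table, columns, and DSN are hypothetical.
import os

import psycopg2
from fastapi import FastAPI

app = FastAPI(title="curated-data-service")


@app.get("/daily-output/{plant_id}")
def daily_output(plant_id: str, limit: int = 30):
    """Return recent daily production rows for one plant."""
    conn = psycopg2.connect(os.environ["DATABASE_URL"])  # e.g. a Postgres DSN
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT day, units FROM daily_output "
                "WHERE plant_id = %s ORDER BY day DESC LIMIT %s",
                (plant_id, limit),
            )
            rows = cur.fetchall()
    finally:
        conn.close()
    return [{"day": str(day), "units": units} for day, units in rows]
```

Run with any ASGI server (for example `uvicorn module:app`); parameterised SQL, as above, keeps the endpoint safe from injection.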
Posted 3 weeks ago
9.0 - 14.0 years
4 Lacs
Bengaluru
Hybrid
Total Experience: 9 years and above
Location: Bangalore
NP: Immediate to max 15 days

Job Description
Senior resources to work on the Batch AI platform. The core skillsets required are: Python, Ray, Spark, Hive, Iceberg, Kubernetes, Airflow, Druid, Superset. An AI/ML background is preferred but not mandatory. Design/architecture experience is preferred, along with strong hands-on skills. Someone with 10+ years and a strong track record may be a good fit. The candidate needs to be based in Bangalore.

About Us: Grid Dynamics (Nasdaq: GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, India, Central and Eastern Europe. To learn more about Grid Dynamics, please visit www.griddynamics.com. Follow us on Facebook, Twitter, and LinkedIn.
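For a sense of the Ray skillset named above, a minimal sketch of Ray's remote-task pattern, which underpins batch platforms of this kind; the per-batch workload here is a stand-in.

```python
# Illustrative sketch: fan batch work out across a Ray cluster as remote tasks.
import ray

ray.init()  # connects to a running cluster, or starts a local one


@ray.remote
def score_batch(batch):
    """Stand-in for per-batch work (feature extraction, model scoring, ...)."""
    return sum(batch) / len(batch)


batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# Launch all batches in parallel; .remote() returns futures immediately.
futures = [score_batch.remote(b) for b in batches]
print(ray.get(futures))  # -> [2.0, 5.0, 8.0]
```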
Posted 3 weeks ago
3.0 years
4 - 6 Lacs
Gurgaon
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists’ activities as they surf the internet via browsers or use mobile apps downloaded from Apple’s and Google’s stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and detect fraudulent behavior.

The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing. They are expected to coordinate, support, and work with multiple distributed project teams in multiple regions. As a member of the technical staff on our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across three different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Qualifications
Responsibilities
System Deployment: Conceive, design, and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.

Key Skills
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field.
Proven experience, minimum 3 years, in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres.
Minimum 2 years of development on an AWS platform.
Strong understanding of CI/CD principles and tools; GitLab a plus.
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions.
Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply.
Uses team collaboration to create innovative solutions efficiently.

Other desirable skills:
Knowledge of networking principles and security best practices.
AWS certifications.
Experience with data warehouses, ETL, and/or data lakes very desirable.
Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus.
Exposure to the Google Cloud Platform (GCP).

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
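To make the high-volume processing requirement concrete, a small illustrative PySpark sketch that aggregates per-device activity and flags outliers, loosely in the spirit of the metering and fraud-detection work described; the schema, paths, and threshold are hypothetical, not Nielsen's actual pipeline.

```python
# Illustrative sketch: summarise per-device events and flag anomalous volumes.
# Schema, paths, and the 10x threshold are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("device_activity_check").getOrCreate()

events = spark.read.json("s3://meter-data/events/")  # placeholder path

# Daily event counts per device.
daily = (
    events.withColumn("day", F.to_date("event_ts"))
          .groupBy("device_id", "day")
          .agg(F.count("*").alias("event_count"))
)

# Crude anomaly flag: activity far above the fleet-wide daily mean.
mean_count = daily.agg(F.avg("event_count")).first()[0]
flagged = daily.where(F.col("event_count") > 10 * mean_count)

flagged.write.mode("overwrite").parquet("s3://meter-data/flagged/")
```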
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Software: fuel for mobility
We bring bold digital visions to life. So we’re on the lookout for more curious and creative engineers who want to create change – one line of high-quality code at a time. Our transformation isn't for everyone, but if you're excited about solving the leading-edge technological challenges facing the auto industry, then let’s talk about your next move.

Let's introduce ourselves
At Volvo Cars, curiosity, collaboration, and continuous learning define our culture. Join our mission to create sustainable transportation solutions that protect what matters most – people, communities, and the planet. As a Data Engineer, you will drive digital innovation, leading critical technology initiatives with global teams. You’ll design and implement solutions impacting millions worldwide, supporting Volvo’s vision for autonomous, electric, and connected vehicles.

What You'll Do
Technical Leadership & Development: Lead development and implementation using Airflow, Amazon Web Services (AWS), Azure, Azure Data Factory (ADF), Big Data and Analytics, Core Data, Data Analysis, ETL/ELT, Power BI, SQL/SQL Script, and Snowflake. Design, build, and maintain scalable solutions supporting global operations. Collaborate closely with USA stakeholders across product management and engineering. Promote technical excellence through code reviews, architecture decisions, and best practices.
Cross-Functional Collaboration: Partner internationally using Microsoft Teams, Slack, SharePoint, and Azure DevOps. Participate in Agile processes and sprint planning. Share knowledge and maintain technical documentation across regions. Support 24/7 operations through on-call rotations and incident management.
Innovation & Continuous Improvement: Research emerging technologies to enhance platform capabilities. Contribute to roadmap planning and architecture decisions. Mentor junior team members and encourage knowledge sharing.

What You'll Bring
Professional Experience: 4-8 years of hands-on experience in software development, system administration, or related fields. Deep expertise in the core technologies listed above, with proven implementation success. Experience collaborating with global teams across time zones. Preferred industry knowledge in automotive, manufacturing, or enterprise software.
Technical Proficiency: Advanced skills in the core technologies listed above. Strong grasp of cloud platforms, DevOps, and CI/CD pipelines. Experience with enterprise integration and microservices architecture. Skilled in database design and optimization with SQL and NoSQL.
Essential Soft Skills: Analytical thinking, collaboration, communication skills, critical thinking, documentation best practices, problem solving, and written communication. Excellent communication, able to explain complex technical topics. Adaptable in multicultural, globally distributed teams. Strong problem-solving abilities.
Additional Qualifications: Business-level English fluency. Flexibility to collaborate across USA time zones.

Volvo Cars – driving change together
Volvo Cars’ success is the result of a collaborative, diverse and inclusive working environment. Today, we’re one of the most well-known and respected car brands, with around 43,000 employees across the globe. At Volvo Cars, your career is designed around your skills and aspirations, so you can reach your fullest potential.
And it’s so exciting – we’re well on our way on our journey towards full electrification. We have five fully electric cars already on the market, and five more on the way. Our fully-electric and plug-in hybrid cars combined make up almost 50 per cent of our sales. So come and join us in shaping the future of mobility. There’s never been a more rewarding time to play your part in our inspiring and creative teams!
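For illustration of the Snowflake and SQL skills listed in this posting, a minimal query sketch using the snowflake-connector-python client; the account settings, credentials, and telemetry table are placeholders invented for the example.

```python
# Illustrative sketch: run an analytics query against Snowflake from Python.
# Account, credentials, warehouse, and table names are hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="VEHICLE_DATA",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT model, COUNT(*) AS readings "
        "FROM telemetry GROUP BY model ORDER BY readings DESC LIMIT 10"
    )
    for model, readings in cur.fetchall():
        print(model, readings)
finally:
    conn.close()
```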
Posted 3 weeks ago
6.0 years
0 Lacs
Ahmedabad
On-site
About YipitData:
YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency.

Why You Should Apply NOW:
You'll be working with many strategic engineering leaders within the company. You'll report directly to the Director of Data Engineering. You will help build out our Data Engineering team presence in India. You will work with a global team. You'll be challenged with a lot of big data problems.

About The Role:
We are seeking a highly skilled Senior Data Engineer to join our dynamic Data Engineering team. The ideal candidate possesses 6-8 years of data engineering experience. An excellent candidate should have a solid understanding of Spark and SQL, and have data pipeline experience. Hired individuals will play a crucial role in helping to build out our data engineering team to support our strategic pipelines and optimize for reliability, efficiency, and performance. Additionally, Data Engineering serves as the gold standard for all other YipitData analyst teams, building and maintaining the core pipelines and tooling that power our products. This high-impact, high-visibility team is instrumental to the success of our rapidly growing business. This is a unique opportunity to be the first hire in this team, with the potential to build and lead the team as their responsibilities expand.

This is a hybrid opportunity based in India. During training and onboarding, we will expect several hours of overlap with US working hours. Afterward, standard IST working hours are permitted with the exception of 1-2 days per week, when you will join meetings with the US team.

As Our Senior Data Engineer You Will:
Report directly to the Senior Manager of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques. Build and maintain end-to-end data pipelines. Help with setting best practices for our data modeling and pipeline builds. Create documentation, architecture diagrams, and other training materials. Become an expert at solving complex data pipeline issues using PySpark and SQL. Collaborate with stakeholders to incorporate business logic into our central pipelines. Deeply learn Databricks, Spark, and other ETL toolings developed internally.

You Are Likely To Succeed If:
You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline. You have 6+ years of experience as a Data Engineer or in other technical functions. You are excited about solving data challenges and learning new skills. You have a great understanding of working with data or building data pipelines. You are comfortable working with large-scale datasets using PySpark, Delta, and Databricks.
You understand business needs and the rationale behind data transformations to ensure alignment with organizational goals and data strategy. You are eager to constantly learn new technologies. You are a self-starter who enjoys working collaboratively with stakeholders. You have exceptional verbal and written communication skills. Nice to have: Experience with Airflow, dbt, Snowflake, or equivalent. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice
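As an illustrative sketch of the PySpark and Delta work this posting mentions, here is a minimal Databricks-style job that cleans raw JSON and writes it as a Delta table. Paths and columns are hypothetical; on Databricks the SparkSession and Delta support come preconfigured, while elsewhere the delta-spark package would also need to be set up.

```python
# Illustrative sketch: clean raw data and persist it as a Delta table.
# Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta_demo").getOrCreate()

raw = spark.read.json("/mnt/raw/transactions/")

cleaned = (
    raw.dropDuplicates(["txn_id"])
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Delta adds ACID writes and time travel on top of plain parquet files.
cleaned.write.format("delta").mode("overwrite").save("/mnt/silver/transactions/")

# Downstream consumers read the same table back as a DataFrame.
silver = spark.read.format("delta").load("/mnt/silver/transactions/")
print(silver.count())
```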
Posted 3 weeks ago
5.0 years
4 - 7 Lacs
Mehsana
On-site
#LI-DS2
Job Summary: Using analytical and experimental techniques, lead the development of fans and airflow systems in terms of noise, strength analysis, thermal flow analysis, and manufacturability.

Responsibilities:
Through numerical and experimental analysis, develop new types of fans (propeller, cross-flow, centrifugal, sirocco) for air conditioners that improve fluid performance and reduce noise.
Perform thermal flow analysis within and around the system to improve the thermal efficiency of the product at the development stage, and analyze thermal efficiency at customer sites.
Work with various stakeholders, including members of the platform design department, each module, and the production technology department, to develop fans that maintain performance and noise levels without sacrificing strength or productivity.
Propose and design prototypes and experimental equipment that will lead to the evaluation of subsystems, including fans and shrouds.

Educational Qualification: Master’s degree (or equivalent) in fluid mechanics and aerodynamics, dealing with flow around fans, turbomachinery design, rotating machinery, and CFD analysis.
Working experience: At least 5 years of fan design, thermal analysis, or research experience.
Skill requirements: Communication and presentation skills. Ability to make objective decisions in collaboration with managers to ensure that the right decisions are made. Ability to make judgments that enable correct responses to stakeholder comments. Ability to propose new approaches to problems.
Language: Excellent communication skills (fluent English, both written and spoken, is preferred).
Location: Kadi
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
5 - 7 Years
2 Openings
Noida

Role description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.

Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLS. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.

Additional Comments:
Skills:
Cloud platforms (AWS, MS Azure, GCP, etc.)
Containerization and orchestration (Docker, Kubernetes, etc.)
API development
Data pipeline construction using languages like Python, PySpark, and SQL
Data streaming (Kafka, Azure Event Hub, etc.)
Data parsing (Akka, MinIO, etc.)
Database management (SQL and NoSQL, including ClickHouse, PostgreSQL, etc.)
Agile methodology (Git, Jenkins, or Azure DevOps, etc.)
JS-like connectors/frameworks for frontend/backend
Collaboration and communication skills
AWS Cloud, Azure Cloud, Docker, Kubernetes

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
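To ground the data-streaming items in the skills list above, a minimal illustrative consume-parse-load loop using the kafka-python client; the topic, brokers, and sink function are hypothetical stand-ins for a real ClickHouse or PostgreSQL writer.

```python
# Illustrative sketch: consume JSON events from Kafka and hand them to a sink.
# Topic, brokers, and the sink are hypothetical.
import json

from kafka import KafkaConsumer


def write_to_store(event: dict) -> None:
    """Stand-in for an insert into ClickHouse/PostgreSQL."""
    print("stored:", event)


consumer = KafkaConsumer(
    "device-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="event-loader",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:  # blocks, yielding records as they arrive
    write_to_store(message.value)
```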
Posted 3 weeks ago