Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for developing and deploying machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms. Translates application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Develops novel ways to use machine learning to solve problems and discover new products. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as resource for colleagues with less experience. Job Description About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally. 
What You’ll Do Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference Engineer features to enrich alerts using service relationships, business context, change history, and topological data Apply NLP and ML techniques to classify and structure logs and unstructured alert messages Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals Conduct A/B testing, offline validation, and live performance monitoring of ML models Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers and leadership Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly Participate in on-call rotations and provide operational support as needed Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field 5+ years of experience building and deploying ML solutions in production environments 2+ years working with AIOps, observability, or real-time operations data Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow) Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs) Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection Experience managing structured + unstructured data, and building features from logs, alerts, metrics, and traces Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases Ability to collaborate effectively with SREs, platform teams, and participate in Agile/DevOps workflows Clear written and verbal communication skills to present findings to technical and non-technical stakeholders Comfortable working across Git, Confluence, JIRA, & collaborative agile environments Nice To Have Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC etc.)
Experience working in streaming media, OTT platforms, or large-scale consumer services Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, & event-based alerting tools Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation) Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME) Experience with containerized model deployment using Docker or Kubernetes Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation Experience designing APIs in Python or Go to expose models as services Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts. Certifications in cloud architecture, ML engineering, or data science specializations. Contributes to the company by creating patents, Application Programming Interfaces (APIs), confluence pages, white papers, presentations, test results, technical manuals, formal recommendations and reports. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years
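To give a concrete flavour of the "entropy-based prioritization" work named in this role, here is a minimal illustrative sketch (not Comcast's actual method): it scores each alert type in a window by its self-information, so chatty, repetitive alerts rank low and rare signals surface. The alert type names and window are invented.

```python
import math
from collections import Counter

def surprisal_by_type(alerts):
    """Score alert types by self-information (-log2 p) within a window.

    Highly repetitive alert types get low scores (likely noise); rare types
    get high scores and can be surfaced to operators first.
    """
    counts = Counter(a["type"] for a in alerts)
    total = sum(counts.values())
    return {t: -math.log2(c / total) for t, c in counts.items()}

if __name__ == "__main__":
    # Hypothetical one-minute window of alerts
    window = [{"type": "disk_usage_warning"}] * 950 + [{"type": "origin_5xx_spike"}] * 3
    for alert_type, bits in sorted(surprisal_by_type(window).items(), key=lambda kv: -kv[1]):
        print(f"{alert_type}: {bits:.2f} bits")
```

A production version would combine such scores with service topology, change history, and learned models rather than frequency alone.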
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Data Engineer, Chennai We’re seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You’ll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g. Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture. Responsibilities Responsible for data pipeline development and maintenance Contribute to development, maintenance, testing strategy, design discussions, and operations of the team Participate in all aspects of agile software development including design, implementation, and deployment Responsible for the end-to-end lifecycle of new product features / components Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design Work with a small, cross-functional team on products and features to drive growth Learning new tools, languages, workflows, and philosophies to grow Research and suggest new technologies for boosting the product Have an impact on product development by making important technical decisions, influencing the system architecture, development practices and more Qualifications Excellent team player with strong communication skills B.Sc. in Computer Sciences or similar 3-5 years of experience in Data Pipeline development 3-5 years of experience in PySpark / Databricks 3-5 years of experience in Python / Airflow Knowledge of OOP and design patterns Knowledge of server-side technologies such as Java, Spring Experience with Docker containers, Kubernetes and Cloud environments Expertise in testing methodologies (Unit-testing, TDD, mocking) Fluent with large scale SQL databases Good problem-solving and analysis abilities Requirements - Advantage Experience with Azure cloud services Experience with Agile Development methodologies Experience with Git Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
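As a purely illustrative sketch of the Airflow-orchestrated ETL/ELT work this Data Engineer role describes (DAG id, paths and the transformation step are invented; the real pipelines would typically run on PySpark/Databricks rather than pandas):

```python
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/orders_raw.parquet"        # placeholder landing location
CURATED_PATH = "/tmp/orders_curated.parquet"

def extract():
    # Stand-in for reading from a source system or data lake
    df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.5, 7.25]})
    df.to_parquet(RAW_PATH)

def transform():
    df = pd.read_parquet(RAW_PATH)
    df["amount_rounded"] = df["amount"].round(2)  # hypothetical cleaning step
    df.to_parquet(CURATED_PATH)

with DAG(
    dag_id="orders_daily_elt",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```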
Posted 3 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy AiSensy is a WhatsApp based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp. Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing 400 Crores+ WhatsApp Messages exchanged between Businesses and Users via AiSensy per year Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more High Impact as Businesses drive 25-80% Revenues using AiSensy Platform Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors Role Overview We are looking for a Senior Machine Learning Engineer to lead the development and deployment of cutting-edge AI/ML systems with a strong focus on LLMs, Retrieval-Augmented Generation (RAG), AI agents, and intelligent automation. You will work closely with cross-functional teams to translate business needs into AI solutions, bringing your expertise in building scalable ML infrastructure, deploying models in production, and staying at the forefront of AI innovation. Key Responsibilities AI & ML System Development Design, develop, and optimize end-to-end ML models using LLMs, transformer architectures, and custom RAG frameworks. Fine-tune and evaluate generative and NLP models for business-specific applications such as conversational flows, auto-replies, and intelligent routing. Lead prompt engineering and build autonomous AI agents capable of executing multi-step reasoning. Infrastructure, Deployment & MLOps Architect and automate scalable training, validation, and deployment pipelines using tools like MLflow, SageMaker, or Vertex AI. Integrate ML models with APIs, databases (vector/graph), and production services ensuring performance, reliability, and traceability. Monitor model performance in real-time, and implement A/B testing, drift detection, and re-training pipelines. Data & Feature Engineering Work with structured, unstructured, and semi-structured data (text, embeddings, chat history). Build and manage vector databases (e.g., Pinecone, Weaviate) and graph-based retrieval systems. Ensure high-quality data ingestion, feature pipelines, and scalable pre-processing workflows. Team Collaboration & Technical Leadership Collaborate with product managers, software engineers, and stakeholders to align AI roadmaps with product goals. Mentor junior engineers and establish best practices in experimentation, reproducibility, and deployment. Stay updated on the latest in AI/ML (LLMs, diffusion models, multi-modal learning), and drive innovation in applied use cases. Required Qualifications Bachelor’s/Master’s degree in Computer Science, Engineering, AI/ML, or a related field from a Tier 1 institution (IIT, NIT, IIIT or equivalent). 2+ years of experience building and deploying machine learning models in production. Expertise in Python and frameworks such as TensorFlow, PyTorch, Hugging Face, and Scikit-learn. Hands-on experience with transformer models, LLMs, LangChain, LangGraph, OpenAI API, or similar. Deep knowledge of machine learning algorithms, model evaluation, hyperparameter tuning, and optimization. Experience working with cloud platforms (AWS, GCP, or Azure) and MLOps tools (MLflow, Airflow, Kubernetes). Strong understanding of SQL, data engineering concepts, and working with large-scale datasets.
Preferred Qualifications Experience with prompt tuning, agentic AI systems, or multi-modal learning. Familiarity with vector search systems (e.g., Pinecone, FAISS, Milvus) and knowledge graphs. Contributions to open-source AI/ML projects or publications in AI journals/conferences. Experience in building conversational AI or smart assistants using WhatsApp or similar messaging APIs. Why Join AiSensy? Build AI that directly powers growth for 100,000+ businesses. Work on cutting-edge technologies like LLMs, RAG, and AI agents in real production environments. High ownership, fast iterations, and impact-focused work. Ready to build intelligent systems that redefine communication? Apply now and join the AI revolution at AiSensy.
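As an illustration of the RAG and vector-search stack this role touches, here is a minimal retrieval sketch using sentence-transformers and an in-memory FAISS index. The documents, model name and prompt format are arbitrary examples; a production system would more likely use a managed vector database such as Pinecone or Weaviate and a full LLM call at the end.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

docs = [
    "Reset a WhatsApp template that was rejected for policy reasons.",
    "How to configure auto-replies outside business hours.",
    "Steps to route a conversation to a human agent.",
]

# Normalised embeddings so inner product behaves like cosine similarity
doc_vecs = model.encode(docs, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "customer wants to talk to a person"
q_vec = model.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)

context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be passed to the LLM of choice
```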
Posted 3 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
We are looking for an experienced and highly skilled GCP Data Engineer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and cloud-based solutions on Google Cloud Platform to support data integration, transformation, and analytics. You will work closely with data analysts, architects, and other stakeholders to build scalable and efficient data systems. Key Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines using GCP tools such as Cloud Dataflow , BigQuery , Cloud Composer , and Cloud Storage . Develop and optimize data models for analytics and reporting using BigQuery. Implement data quality, data governance, and metadata management best practices. Collaborate with data scientists, analysts, and other engineers to ensure data availability and reliability. Work with streaming data and batch processing frameworks to handle real-time data ingestion and processing. Monitor and troubleshoot data pipeline performance and ensure high availability. Develop automation and orchestration solutions using Cloud Functions , Cloud Composer (Apache Airflow) , or other tools. Ensure security, privacy, and compliance of data solutions in line with organizational and regulatory requirements. Required Skills And Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, or related field. 6+ years of experience in data engineering or similar role, with 4+ years specifically on GCP. Strong hands-on experience with BigQuery , Cloud Dataflow , Cloud Pub/Sub , Cloud Storage , and Cloud Functions . Proficient in SQL, Python, and/or Java for data manipulation and automation. Experience with data modeling, warehousing concepts, and data architecture. Familiarity with DevOps practices, CI/CD pipelines, and Infrastructure as Code (e.g., Terraform). Strong problem-solving skills and the ability to work in an agile, collaborative environment. Preferred Qualification: GCP Professional Data Engineer Certification. Experience with Apache Beam, Airflow, Kafka, or similar tools. Understanding of machine learning pipelines and integration with AI/ML tools on GCP. Experience in multi-cloud or hybrid cloud environments. What we offer: Competitive salary and benefits package Flexible working hours and remote work opportunities Opportunity to work with cutting-edge technologies and cloud solutions Collaborative and supportive work culture Career growth and certification support
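For flavour, a minimal example of the kind of BigQuery ELT step such pipelines automate, using the google-cloud-bigquery client. The dataset and table names are placeholders, not a real schema, and credentials are assumed to come from application-default authentication.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials for the active project

# Illustrative ELT step: aggregate a raw events table into a reporting table
sql = """
CREATE OR REPLACE TABLE analytics.daily_event_counts AS
SELECT DATE(event_ts) AS event_date, event_type, COUNT(*) AS events
FROM raw.events
GROUP BY event_date, event_type
"""

job = client.query(sql)   # submits the query job to BigQuery
job.result()              # block until the job finishes
print(f"Job {job.job_id} finished; results are in analytics.daily_event_counts")
```

In practice a step like this would be scheduled through Cloud Composer (Airflow) rather than run ad hoc.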
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our global technology team is driving innovation and shaping the future of our private credit business through strategic solutions and cutting-edge projects. Join us to contribute to our long-term vision, leveraging both in-house and vendor applications in a collaborative and dynamic environment. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You’ll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes. What role will you play? In this role, you will design and implement high-quality solutions, collaborating closely with the business to deliver key projects integrating multiple systems and data sources. You will apply software engineering best practices, including continuous integration and DevOps, to drive impactful revenue-generating projects. This role offers exposure to innovative technologies such as serverless solutions, microservice architecture, delta lake, and cloud-based applications. What You Offer 7+ years of full stack development experience, including managing Infrastructure as Code (IaC) Proven development skills to work with Python, Structured Query Language (SQL), ReactJS, REST API. Hands-on experience with Amazon Web Services (AWS) offerings such as EC2, S3, RDS, DynamoDB, Lambda and EBS for designing scalable, cloud-native, distributed software utilising modern development architectures Experience and working knowledge of finance-related projects (especially Private assets) Experience in streamlining continuous integration/continuous delivery (CI/CD) pipelines, with familiarity with containerisation platforms such as Kubernetes, Docker and workflow management platforms such as Airflow. Excellent stakeholder management across front to back office through the full software development lifecycle We love hearing from anyone inspired to build a better future with us. If you're excited about the role or about working at Macquarie, we encourage you to apply. What We Offer Macquarie employees can access a wide range of benefits which, depending on eligibility criteria, include: Hybrid and flexible working arrangements One wellbeing leave day per year Up to 20 weeks paid parental leave as well as benefits to support you as you transition to life as a working parent Paid volunteer leave and donation matching Other benefits to support your physical, mental and financial wellbeing Access a wide range of learning and development opportunities About Technology Technology enables every aspect of Macquarie, for our people, our customers and our communities. We’re a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow’s technology solutions. Our commitment to diversity, equity and inclusion We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process.
Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.
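An illustrative sketch of the serverless style this role mentions: an AWS Lambda handler that serves data from S3 behind an API. The bucket, key and event shape are assumptions for illustration only, not Macquarie's actual systems.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical API Gateway proxy event; bucket and key names are illustrative
    params = event.get("queryStringParameters") or {}
    key = params.get("report_key", "reports/latest.json")
    obj = s3.get_object(Bucket="example-private-credit-data", Key=key)
    payload = json.loads(obj["Body"].read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```

Deployed behind API Gateway, a handler like this gives a REST endpoint with no servers to manage; IaC tooling would define the function, role and trigger.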
Posted 4 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
No Relocation Assistance Offered Job Number #167816 - Mumbai, Maharashtra, India Who We Are Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all. Brief introduction - Role Summary/Purpose: We are excited to invite applications for the position of Full Stack Developer within our Global Tech Team. This role will support our team in deploying best-in-class technology to optimize and expand our digital engagement programs leading to better targeting and engagement with our professionals, customers and consumers. We are looking for a highly motivated individual to join our team to help realize our vision. The ideal candidate is very customer focused and can work well both independently and within a team. The candidate needs to be a self-starter, eager to learn and bring to bear new technologies to build the best digital experience. Responsibilities: Architect, Develop & Support web/full stack applications for different multi-functional projects. Work with a distributed team and propose the right tech stack for the applications. Develops elite user interfaces and user experiences of applications. Implements the server-side logic and functionality of applications. Designs and interacts with databases, ensuring efficient storage and retrieval of data. Writes unit tests, conducts testing, and debugs code to ensure the reliability and functionality of the application. Act as a Full Stack Mentor to other developers in the Team Required Qualifications: Bachelor's Degree or equivalent in Computer Science, Information Technology, Mathematics, Engineering or similar degree At least 3+ years of experience designing and deploying end-to-end web applications At least 3+ years of experience with full product life cycle releases A deep understanding of web technologies (JavaScript, HTML, CSS), networking, debugging Experience developing frontend web applications in a reactive modern JavaScript framework such as React, Vue or Angular Demonstrable experience applying test driven development methodologies to sophisticated business problems Relational database technologies Experience in backend languages like Python, NodeJS Optimizing and scaling code in a production environment Handling source code with git Knowledge of and experience applying security standard methodologies and patterns Excellent diagnostic and problem-solving skills Working on Agile/SCRUM development teams Static and dynamic analyzing toolsets Use of user centric design and applying user experience concepts Excellent verbal and written communication skills as well as customer relationship building skills Adapt to and work reliably with a variety of engagements and in high-reaching environments Strong organization and project management skills with the ability to handle sophisticated projects with many partners.
Github, Github Actions, Apache Airflow Preferred Qualifications: Developing applications on cloud platforms (AWS, Azure, GCP) Containerization (Docker or Kubernetes) Experience with Data Flow, Data Pipeline and workflow management tools: Airflow, Airbyte, Cloud Composer, etc. Experience with Data Warehousing solutions: Snowflake, BigQuery, etc Our Commitment to Inclusion Our journey begins with our people—developing strong talent with diverse backgrounds and perspectives to best serve our consumers around the world and fostering an inclusive environment where everyone feels a true sense of belonging. We are dedicated to ensuring that each individual can be their authentic self, is treated with respect, and is empowered by leadership to contribute meaningfully to our business. Equal Opportunity Employer Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
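To illustrate the backend side of the stack described above, a minimal Python REST endpoint with test-driven-style checks. FastAPI is just one possible framework choice here, and the product model and data are invented.

```python
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Product(BaseModel):
    id: int
    name: str

# In-memory stand-in for a real database layer
PRODUCTS = {1: Product(id=1, name="Toothpaste 120g")}

@app.get("/products/{product_id}", response_model=Product)
def get_product(product_id: int) -> Product:
    product = PRODUCTS.get(product_id)
    if product is None:
        raise HTTPException(status_code=404, detail="Product not found")
    return product

# Unit-test-style checks of the endpoint behaviour
client = TestClient(app)
assert client.get("/products/1").json()["name"] == "Toothpaste 120g"
assert client.get("/products/99").status_code == 404
```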
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer Consultant/Expert – GCP Data Engineer Location: Chennai (Onsite) 34350 Employment Type: Contract Budget: Up to ₹18 LPA Assessment: Google Cloud Platform Engineer (HackerRank or equivalent) Notice Period: Immediate Joiners Preferred Role Summary We are seeking a highly skilled GCP Data Engineer to support the modernization of enterprise data platforms. The ideal candidate will be responsible for designing and implementing scalable, high-performance data pipelines and solutions on Google Cloud Platform (GCP) . You will work with large-scale datasets, integrating legacy and modern systems to enable advanced analytics and AI/ML capabilities. The role requires a deep understanding of GCP services, strong data engineering skills, and the ability to collaborate across teams to deliver robust data solutions. Key Responsibilities Design and develop production-grade data engineering solutions using GCP services such as: BigQuery, Dataflow, Dataform, Dataproc, Cloud Composer, Cloud SQL, Airflow, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, Pub/Sub, App Engine Develop batch and real-time streaming pipelines for data ingestion, transformation, and processing. Integrate data from multiple sources including legacy and cloud-based systems. Collaborate with stakeholders and product teams to gather data requirements and align technical solutions to business needs. Conduct in-depth data analysis and impact assessments for data migrations and transformations. Implement CI/CD pipelines using tools like Tekton, Terraform, and GitHub. Optimize data workflows for performance, scalability, and cost-effectiveness. Lead and mentor junior engineers; contribute to knowledge sharing and documentation. Champion data governance, data quality, security, and compliance best practices. Utilize monitoring/logging tools to proactively address system issues. Deliver high-quality code using Agile methodologies including TDD and pair programming. Required Skills & Experience GCP Data Engineer Certification. Minimum 5+ years of experience designing and implementing complex data pipelines. 3+ years of hands-on experience with GCP. Strong expertise in: SQL, Python, Java, or Apache Beam Airflow, Dataflow, Dataproc, Dataform, Data Fusion, BigQuery, Cloud SQL, Pub/Sub Infrastructure-as-Code tools such as Terraform DevOps tools: GitHub, Tekton, Docker Solid understanding of microservice architecture, CI/CD integration, and container orchestration. Experience with data security, governance, and compliance in cloud environments. Preferred Qualifications Experience with real-time data streaming using Apache Kafka or Pub/Sub. Exposure to AI/ML tools or integration with AI/ML pipelines. Working knowledge of data science principles applied on large datasets. Experience in a regulated domain (e.g., financial services or insurance). Experience with project management and agile tools (e.g., JIRA, Confluence). Strong analytical and problem-solving mindset. Effective communication skills and ability to collaborate with cross-functional teams. Education Required: Bachelor's degree in Computer Science, Engineering, or a related technical field. Preferred: Master's degree or certifications in relevant domains. Skills: github,bigquery,airflow,ml,pub/sub,terraform,python,apache beam,dataflow,gcp,gcp data engineer certification,tekton,java,dataform,docker,data fusion,sql,dataproc,cloud sql,cloud
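A minimal Apache Beam sketch of the batch-pipeline style listed above; the same code can run on Dataflow by supplying DataflowRunner options. The bucket paths and record format are assumptions for illustration.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_record(line):
    # Assumed CSV layout: entity_id,metric_value
    entity_id, value = line.split(",")
    return entity_id, float(value)

# Pass --runner=DataflowRunner plus project/region options to execute on Dataflow
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/*.csv")
        | "Parse" >> beam.Map(parse_record)
        | "MeanPerEntity" >> beam.combiners.Mean.PerKey()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]:.3f}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/curated/entity_means")
    )
```

Streaming variants of the same pipeline would read from Pub/Sub and apply windowing instead of reading files.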
Posted 4 days ago
0.0 - 2.0 years
3 - 10 Lacs
Niranjanpur, Indore, Madhya Pradesh
Remote
Job Title - Sr. Data Engineer Experience - 2+ Years Location - Indore (onsite) Industry - IT Job Type - Full time Roles and Responsibilities- 1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration. 2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases. 3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes. 4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions. 5. Optimize database performance and manage large-scale datasets for efficient processing. 6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions. 7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow. 8. Implement and enforce data governance policies to ensure compliance and data security. 9. Troubleshoot and resolve data-related issues to maintain seamless operations. 10. Stay updated on emerging tools, technologies, and trends in data engineering. Skills and Knowledge- 1. Core Skills: ● Proficient in Python (libraries: Pandas, NumPy) and SQL. ● Knowledge of data modeling techniques, including: ○ Entity-Relationship (ER) Diagrams ○ Dimensional Modeling ○ Data Normalization ● Familiarity with ETL processes and tools like: ○ Azure Data Factory (ADF) ○ SSIS (SQL Server Integration Services) 2. Cloud Expertise: ● AWS Services: Glue, Redshift, Lambda, EKS, RDS, Athena ● Azure Services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL ● Snowflake 3. Big Data and Workflow Automation: ● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka. ● Experience with workflow automation tools like Apache Airflow (or similar). Qualifications and Requirements- ● Education: ○ Bachelor’s degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field. ● Experience: ○ Freshers with a strong understanding, internships, and relevant academic projects are welcome. ○ 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred. ● Other Skills: ○ Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders. ○ Ability to work in a dynamic, research-oriented team with concurrent projects. Job Types: Full-time, Permanent Pay: ₹300,000.00 - ₹1,000,000.00 per year Benefits: Paid sick time Provident Fund Work from home Schedule: Day shift Monday to Friday Weekend availability Supplemental Pay: Performance bonus Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Data Engineer: 2 years (Preferred) Work Location: In person Application Deadline: 31/08/2025
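As a small illustration of the validation and quality-assurance step listed in responsibility 3, a pandas-based quality gate that could run before loading to the warehouse; the column names and thresholds are invented.

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Basic illustrative quality gates: uniqueness, value ranges, null rates."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        issues.append("negative amounts")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        issues.append(f"customer_id null rate {null_rate:.1%} exceeds 1% threshold")
    if issues:
        raise ValueError("Validation failed: " + "; ".join(issues))
    return df

clean = validate_orders(pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer_id": ["a", "b", "c"],
    "amount": [10.0, 5.5, 99.0],
}))
print(len(clean), "rows passed validation")
```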
Posted 4 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer – Senior (Full Stack Backend – Java) Location: Chennai (Onsite) Employment Type: Contract Budget: Up to ₹22 LPA 34347 Assessment: Full Stack Backend – Java (via HackerRank or equivalent platform) Notice Period: Immediate Joiners Preferred Role Overview We are seeking a highly skilled Senior Software Engineer with expertise in backend development, microservices architecture, and cloud-native technologies. The selected candidate will be part of a collaborative product team responsible for developing and deploying REST APIs and microservices for digital platforms. The role involves working in a fast-paced agile environment, contributing to both engineering excellence and product innovation. Key Responsibilities Design, develop, test, and deploy high-quality, scalable backend systems and APIs. Collaborate with cross-functional teams including product managers, designers, and QA engineers to deliver customer-centric solutions. Write clean, maintainable, and well-documented code following industry best practices. Participate in pair programming, code reviews, and test-driven development. Contribute to defining architecture and service-level objectives. Conduct proof-of-concepts for new capabilities and features. Drive continuous improvement in code quality, testing, and deployment processes. Required Skills 7+ years of hands-on experience in software engineering with a focus on backend development or full-stack engineering. Strong expertise in Java and microservices architecture. Solid understanding and working knowledge of: Google Cloud Platform (GCP) services including BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL, and Airflow. Infrastructure as Code (IaC) tools like Terraform. CI/CD tools such as Tekton. Databases: PostgreSQL, Cloud SQL. Programming/scripting: Python, PySpark. Building and consuming RESTful APIs. Preferred Qualifications Experience with containerization and orchestration tools. Familiarity with monitoring tools and service-level indicators (SLIs/SLAs). Exposure to agile frameworks like Extreme Programming (XP), Scrum, or Kanban. Education Required: Bachelor's degree in Computer Science, Engineering, or a related technical discipline. Skills: restful apis,pyspark,tekton,data fusion,bigquery,cloud sql,microservices architecture,microservices,software,terraform,postgresql,dataflow,code,cloud,dataproc,google cloud platform (gcp),ci/cd,airflow,python,java
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Max. Salary: 80K per month Working Model: Hybrid (2-3 days from Bangalore office; tentative location: Indiranagar, HSR, or Bellandur). Min. Exp: 4-6 Years Role: Full Time Must have: GCP or any cloud experience Docker and/or K8s (Kubernetes) experience Nice to have: CI/CD pipelines Apache Airflow, dbt Working experience of building data platforms. Experience integrating many open source platforms and coordinating workflows Vertical AI solutions development - using LLMs Working exposure to AI-assisted coding - Cursor, claude-code, GitHub Copilot etc. They'll be working along with our US team (Chicago-based), so expect a few hours of overlap. For the initial couple of weeks this will be a must; later on it can be based on requirements and work. Daily global stand-ups, so they are expected to be available for the call daily Job Type: Full-time Pay: ₹70,000.00 - ₹80,000.00 per month
Posted 4 days ago
3.0 years
5 - 10 Lacs
Kazhakuttam
On-site
About the Role You will architect, build and maintain end-to-end data pipelines that ingest 100 GB+ of NGINX/web-server logs from Elasticsearch, transform them into high-quality features, and surface actionable insights and visualisations for security analysts and ML models. Acting as both a Data Engineer and a Behavioural Data Analyst, you will collaborate with security, AI and frontend teams to ensure low-latency data delivery, rich feature sets and compelling dashboards that spot anomalies in real time. Key Responsibilities ETL & Pipeline Engineering: Design and orchestrate scalable batch / near-real-time ETL workflows to extract raw logs from Elasticsearch. Clean, normalize and partition logs for long-term storage and fast retrieval. Optimize Elasticsearch indices, queries and retention policies for performance and cost. Feature Engineering & Feature Store: Assist in the development of robust feature-engineering code in Python and/or PySpark. Define schemas and loaders for a feature store (Feast or similar). Manage historical back-fills and real-time feature look-ups ensuring versioning and reproducibility. Behaviour & Anomaly Analysis: Perform exploratory data analysis (EDA) to uncover traffic patterns, bursts, outliers and security events across IPs, headers, user agents and geo data. Translate findings into new or refined ML features and anomaly indicators. Visualisation & Dashboards: Create time-series, geo-distribution and behaviour-pattern visualisations for internal dashboards. Partner with frontend engineers to test UI requirements. Monitoring & Scaling: Implement health and latency monitoring for pipelines; automate alerts and failure recovery. Scale infrastructure to support rapidly growing log volumes. Collaboration & Documentation: Work closely with ML, security and product teams to align data strategy with platform goals. Document data lineage, dictionaries, transformation logic and behavioural assumptions. Minimum Qualifications: Education – Bachelor’s or Master’s in Computer Science, Data Engineering, Analytics, Cybersecurity or related field. Experience – 3 + years building data pipelines and/or performing data analysis on large log datasets. Core Skills Python (pandas, numpy, elasticsearch-py, Matplotlib, plotly, seaborn; PySpark desirable) Elasticsearch & ELK stack query optimisation SQL for ad-hoc analysis Workflow orchestration (Apache Airflow, Prefect or similar) Data modelling, versioning and time-series handling Familiarity with visualisation tools (Kibana, Grafana). DevOps – Docker, Git, CI/CD best practices. Nice-to-Have Kafka, Fluentd or Logstash experience for high-throughput log streaming. Web-server log expertise (NGINX / Apache, HTTP semantics) Cloud data platform deployment on AWS / GCP / Azure. Hands-on exposure to feature stores (Feast, Tecton) and MLOps. Prior work on anomaly-detection or cybersecurity analytics systems. Why Join Us? You’ll sit at the nexus of data engineering and behavioural analytics, turning raw traffic logs into the lifeblood of a cutting-edge AI security product. If you thrive on building resilient pipelines and diving into the data to uncover hidden patterns, we’d love to meet you. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Work Location: In person
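A minimal sketch of the first pipeline stage described above: pulling recent NGINX access logs out of Elasticsearch and turning them into per-IP behavioural features. The index pattern, field names and endpoint are assumptions; a production job would run incrementally and write results to a feature store rather than printing them.

```python
import pandas as pd
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Last hour of access logs; index pattern and field names are assumed
query = {"query": {"range": {"@timestamp": {"gte": "now-1h"}}}}
rows = [
    {
        "ip": hit["_source"].get("remote_addr"),
        "status": int(hit["_source"].get("status", 0)),
        "bytes": int(hit["_source"].get("bytes_sent", 0)),
    }
    for hit in scan(es, index="nginx-access-*", query=query)
]

df = pd.DataFrame(rows)
# Per-IP behavioural features: request volume, server-error rate, bytes transferred
features = df.groupby("ip").agg(
    requests=("status", "size"),
    error_rate=("status", lambda s: (s >= 500).mean()),
    total_bytes=("bytes", "sum"),
)
print(features.sort_values("requests", ascending=False).head())
```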
Posted 4 days ago
6.0 years
6 - 9 Lacs
Hyderābād
On-site
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres. About Data Platform: The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data, and project environments across business domains. The goal is to empower multiple teams to create and manage high integrity data and data products that are analytics and AI ready, and consumed internally and externally. What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero trust, least privilege architecture, with a strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions to reflect the business needs. Responsibilities will include: Collaborating across CACI departments to develop and maintain the data platform Building infrastructure and data architectures in CloudFormation and SAM. Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, Sagemaker, Redshift, Aurora and Snowflake Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow Monitoring and reporting on the data platform performance, usage and security Designing and applying security and access control architectures to secure sensitive data You will have: 6+ years of experience in a Data Engineering role. Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, Sagemaker, Aurora and Redshift.
Experience administrating databases and data platforms Good coding discipline in terms of style, structure, versioning, documentation and unit tests Strong proficiency in Cloud Formation, Python and SQL Knowledge and experience of relational databases such as Postgres, Redshift Experience using Git for code versioning, and lifecycle management Experience operating to Agile principles and ceremonies Hands-on experience with CI/CD tools such as GitLab Strong problem-solving skills and ability to work independently or in a team environment. Excellent communication and collaboration skills. A keen eye for detail, and a passion for accuracy and correctness in numbers Whilst not essential, the following skills would also be useful: Experience using Jira, or other agile project management and issue tracking software Experience with Snowflake Experience with Spatial Data Processing
Posted 4 days ago
8.0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory , our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI , our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn , X , YouTube , and Facebook . We are inviting applications for the role of Lead Consultant– Data Engineer! This role supports business enablement, which includes understanding of business trends, provide data driven solutions at scale. Hire will be responsible for developing, expanding and optimizing our data pipeline architecture, as well as optimizing data flow and collaboration from cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from ground up either in on-prem or in cloud (AWS/Azure). The data engineer will support our software developers, database architects, data analyst and data scientists on data initiatives and will ensure optimal data delivery architecture. Core data engineering work experience in Life Sciences/Healthcare/CPG for minimum 8+ years Work location: Bangalore Responsibilities Good years of professional experience in creating and maintaining optimal data pipeline architecture. 
Assemble large, complex data sets that meet functional/non-functional business requirements Experience working on warehousing systems, and an ability to contribute towards implementing end-to-end, loosely coupled/decoupled technology solutions for data ingestion and processing, data storage, data access, and integration with business user-centric analytics/business intelligence frameworks Advanced working SQL knowledge and experience working with relational databases, query authoring as well as working familiarity with a variety of databases A successful history of manipulating, processing and extracting value from large disconnected datasets Design, develop, and maintain scalable and resilient ETL/ELT pipelines for handling large volumes of complex data Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure big data toolsets Architecting and implementing data governance and security for data platforms on cloud Cloud certification will be an advantage but not a mandate for this role Experience in the following software/tools: Experience with relational SQL and NoSQL databases, including Postgres and MongoDB Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with data pipeline and workflow management tools: Airflow, Luigi, etc. Experience with AWS Cloud services or Azure cloud services Experience with scripting languages: Python or Java Understanding of stream-processing systems: Spark Streaming, etc. Strong project management and organizational skills Ability to comprehend business needs, convert them into BRD & TRD (Business/Technical requirement document), develop implementation roadmap and execute on time Effectively respond to requests for ad hoc analyses. Good verbal and written communication skills Ownership of tasks assigned without supervisory follow-up Proactive planner and can work independently to manage own responsibilities Personal drive and positive work ethic to deliver results within tight deadlines and in demanding situations Qualifications Minimum qualifications Master's or bachelor's in engineering - BE/B-Tech, BCA, MCA, or BSc/MSc in science or a related field Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws.
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 30, 2025, 1:44:51 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
Posted 4 days ago
5.0 years
7 - 9 Lacs
Hyderābād
On-site
Location: Hyderabad, Telangana Time type: Full time Job level: Senior Associate Job type: Regular Category: Technology Consulting ID: JR111910 About us We are the leading provider of professional services to the middle market globally, our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM. Snowflake Engineer We are currently seeking an experienced Snowflake Engineer for our Data Analytics team. This role involves designing, building, and maintaining our Snowflake cloud data warehouse. Candidates should have strong Snowflake, SQL, and cloud data solutions experience. Responsibilities Design, develop, and maintain efficient and scalable data pipelines in Snowflake, encompassing data ingestion, transformation, and loading (ETL/ELT). Implement and manage Snowflake security, including role-based access control, network policies, and data encryption. Develop and maintain data models optimized for analytical reporting and business intelligence. Collaborate with data analysts, scientists, and stakeholders to understand data requirements and translate them into technical solutions. Monitor and troubleshoot Snowflake performance, identifying and resolving bottlenecks. Automate data engineering processes using scripting languages (e.g., Python, SQL) and orchestration tools (e.g., Airflow, dbt). Designing, developing, and deploying APIs within Snowflake using stored procedures and user-defined functions (UDFs) Lead and mentor a team of data engineers and analysts, providing technical guidance, coaching, and professional development opportunities. Stay current with the latest Snowflake features and best practices. Contribute to the development of data engineering standards and best practices. Document data pipelines, data models, and other technical specifications. Qualifications Bachelor’s degree or higher in computer science, Information Technology, or a related field. A minimum of 5 years of experience in data engineering and management, including over 3 years of working with Snowflake. Strong understanding of data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas. Proficiency in SQL and experience with data transformation and manipulation. Experience with ETL/ELT tools and processes. Experience with Apache Iceberg. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Preferred qualifications Snowflake certifications (e.g., SnowPro Core Certification). Experience with scripting languages (e.g., Python) and automation tools (e.g., Airflow, dbt). Experience with cloud platforms (e.g., AWS, Azure, GCP). Experience with data visualization tools (e.g., Tableau, Power BI). Experience with Agile development methodologies. Experience with Snowflake Cortex, including Cortex Analyst, Arctic TILT, and Snowflake AI & ML Studio. At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html. 
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV Status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans, and Indian Armed Forces Personnel status; pre-disposing genetic characteristics or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
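To illustrate the "APIs within Snowflake via stored procedures and UDFs" responsibility above, a minimal sketch using the Snowflake Python connector to deploy and call a SQL UDF. The connection details, object names and margin formula are placeholders, not RSM's actual environment.

```python
import snowflake.connector

# Credentials would normally come from a secrets manager, not literals
conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Deploy a simple SQL UDF from code so it can be versioned and reviewed
cur.execute("""
CREATE OR REPLACE FUNCTION margin_pct(revenue FLOAT, cost FLOAT)
RETURNS FLOAT
AS $$
    CASE WHEN revenue = 0 THEN NULL ELSE (revenue - cost) / revenue * 100 END
$$
""")

cur.execute("SELECT margin_pct(200.0, 150.0)")
print(cur.fetchone()[0])  # expected: 25.0
cur.close()
conn.close()
```

The same pattern extends to Python UDFs and stored procedures via Snowpark, and to orchestration through dbt or Airflow.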
Posted 4 days ago
2.0 years
7 - 7 Lacs
Gurgaon
On-site
Job Purpose We are seeking an exceptionally talented software developer, willing to relocate to Costa Rica, to join the DTS team as a key member, with a strong background in building robust and scalable backend systems. You will be part of a team that plays a critical role in supporting research and portfolio generation through advanced technology solutions. The team is involved throughout the entire software development lifecycle—from planning and development to deployment and operations—and also provides second-line production support. Desired Skills and Experience Essential skills 2+ years of hands-on experience as a developer Strong data structures and algorithms fundamentals Knowledge of Python and Linux/Unix platforms; familiarity with scripting languages Experience designing and maintaining distributed system architectures Hands-on experience developing and maintaining backend services in Python Familiarity with data processing and orchestration technologies such as Spark, Kafka, Airflow, and Kubernetes Experience with monitoring tools like Prometheus, Grafana, Sentry, and Alerta Experience in finance is a plus Key Responsibilities Design and develop scalable, robust software applications with a focus on backend systems and data-intensive workflows. Build and maintain complex data pipelines and frameworks for strategy and performance analytics Work with technologies such as Spark, Kafka, Kubernetes, and modern monitoring tools. Apply strong debugging and problem-solving skills to ensure system reliability and performance. Demonstrate a solid understanding of data structures, algorithms, object-oriented programming, and MVC web frameworks. Operate effectively in Unix/Linux environments with exposure to caching tools, queuing systems, and data visualization platforms. Key Metrics Python, Spark, Software Engineer, Data Structures and Algorithms Behavioral Competencies Good communication in English (verbal and written), critical thinking, attention to detail Experience in managing client stakeholders
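As a small illustration of the monitoring stack named above, a sketch instrumenting a worker with prometheus_client so Prometheus/Grafana can scrape job counts and latencies. The metric names and simulated workload are invented.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names and labels
JOBS = Counter("strategy_jobs_total", "Strategy analytics jobs processed", ["status"])
LATENCY = Histogram("strategy_job_seconds", "Job processing time in seconds")

def process_job():
    with LATENCY.time():                         # observe wall-clock duration
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for real work
        if random.random() < 0.05:
            raise RuntimeError("simulated failure")

if __name__ == "__main__":
    start_http_server(8000)   # exposes /metrics for Prometheus to scrape
    for _ in range(1000):
        try:
            process_job()
            JOBS.labels(status="ok").inc()
        except RuntimeError:
            JOBS.labels(status="error").inc()
```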
Posted 4 days ago
0.0 - 5.0 years
5 - 19 Lacs
HSR Layout, Bengaluru, Karnataka
On-site
Data Engineering / Tech Lead – Experience: 4+ years
About Company
InspironLabs is a GenAI-driven software services company focused on building AI-powered, scalable digital solutions. Our skilled team delivers intelligent applications tailored to specific business challenges, using AI and Generative AI (GenAI) to accelerate innovation. Key strengths include: AI & GenAI Focus – Harnessing AI and Generative AI to deliver smarter solutions. Scalable Tech Stack – Building future-ready systems for performance and resilience. Proven Enterprise Experience – Deploying solutions across industries and geographies. To know more, visit: www.inspironlabs.com
Key Responsibilities
• Design, implement, and maintain robust data pipelines.
• Collaborate with data scientists and analysts for integrated solutions.
• Mentor junior engineers and manage project timelines.
Required Skills
• Experience with Spark, Hadoop, Kafka.
• Expertise in SQL, Python, cloud data platforms (AWS/GCP/Azure).
• Hands-on with orchestration tools like Airflow, dbt.
Qualifications
Experience: 4 to 5 years in data engineering roles. Bachelor’s in Computer Science, Engineering, or related field.
Place of Work: In Office – Bangalore
Job Type: Full-time
Pay: ₹560,716.53 - ₹1,944,670.55 per year
Benefits: Flexible schedule, Health insurance, Paid sick time, Paid time off, Provident Fund
Ability to commute/relocate: HSR Layout, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): What is your notice period? What is your current CTC?
Work Location: In person
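Since the role calls for hands-on orchestration with Airflow, here is a minimal, hedged Python sketch of a daily Airflow DAG; the DAG id, task, and extract_and_load function are illustrative placeholders, and it assumes Airflow 2.4+ (where the schedule argument replaced schedule_interval).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Placeholder: pull from a source system and land the data for transformation.
    print("running extract/load for", context["ds"])

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)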
Posted 4 days ago
6.0 years
15 - 18 Lacs
Indore
On-site
Location: Indore Experience: 6+ Years Work Type: Hybrid Notice Period: 0-30 days joiners
We are hiring for a Digital Transformation Consulting firm that specializes in the advisory and implementation of AI, Automation, and Analytics strategies for healthcare providers. The company is headquartered in NJ, USA, and its India office is in Indore, MP.
Job Description: We are seeking a highly skilled Tech Lead with expertise in database management, data warehousing, and ETL pipelines to drive data initiatives across the company. The ideal candidate will lead a team of developers, architects, and data engineers to design, develop, and optimize data solutions. This role requires hands-on experience in database technologies, data modeling, ETL processes, and cloud-based data platforms.
Key Responsibilities: Lead the design, development, and maintenance of scalable database, data warehouse, and ETL solutions. Define best practices for data architecture, modeling, and governance. Oversee data integration, transformation, and migration strategies. Ensure high availability, performance tuning, and optimization of databases and ETL pipelines. Implement data security, compliance, and backup strategies.
Required Skills & Qualifications: 6+ years of experience in database and data engineering roles. Strong expertise in SQL, NoSQL, and relational database management systems (RDBMS). Hands-on experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery). Deep understanding of ETL tools and frameworks (e.g., Apache Airflow, Talend, Informatica). Experience with cloud data platforms (AWS, Azure, GCP). Proficiency in programming/scripting languages (Python, SQL, Shell scripting). Strong problem-solving, leadership, and communication skills.
Preferred Skills (Good to Have): Experience with big data technologies (Hadoop, Spark, Kafka). Knowledge of real-time data processing. Exposure to AI/ML technologies and working with ML algorithms.
Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹1,800,000.00 per year Schedule: Day shift
Application Question(s): We must fill this position urgently. Can you start immediately? Have you held a lead role in the past?
Experience: Extract, Transform, Load (ETL): 6 years (Required) Python: 5 years (Required) Big data technologies (Hadoop, Spark, Kafka): 6 years (Required) Snowflake: 6 years (Required) Data warehouse: 6 years (Required)
Location: Indore, Madhya Pradesh (Required) Work Location: In person
Posted 4 days ago
0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Job Description Position: Data Engineer Intern Location: Remote Duration: 2-6 months Company: Collegepur Type: Unpaid Internship About the Internship: We are seeking a skilled Data Engineer to join our team, with a focus on cloud data storage, ETL processes, and database/data warehouse management. If you are passionate about building robust data solutions and enabling data-driven decision-making, we want to hear from you! Key Responsibilities: 1. Design, develop, and maintain scalable data pipelines to process large datasets from multiple sources, both structured and unstructured. 2. Implement and optimize ETL (Extract, Transform, Load) processes to integrate, clean, and transform data for analytical use. 3. Manage and enhance cloud-based data storage solutions, including data lakes and data warehouses, using platforms such as AWS, Azure, or Google Cloud. 4. Ensure data security, privacy, and compliance with relevant standards and regulations. 5. Collaborate with data scientists, analysts, and software engineers to support data-driven projects and business processes. 6. Monitor and troubleshoot data pipelines to ensure efficient, real-time, and batch data processing. 7. Maintain comprehensive documentation and data mapping across multiple systems. Requirements: 1. Proven experience with cloud platforms (AWS, Azure, or Google Cloud). 2. Strong knowledge of database systems, data warehousing, and data modeling. 3. Proficiency in programming languages such as Python, Java, or Scala. 4. Experience with ETL tools and frameworks (e.g., Airflow, Informatica, Talend). 5. Familiarity with data security, compliance, and governance practices. 6. Excellent analytical, problem-solving, and communication skills. 7. Bachelor’s degree in Computer Science, Information Technology, or related field.
Posted 4 days ago
10.0 years
0 Lacs
India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.
Data Engineer
Locations: Kochi/Chennai/Coimbatore/Mumbai/Pune/Hyderabad
Job Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have deep expertise in Azure Databricks and Python, and experience building scalable data pipelines. Familiarity with Data Fabric architectures is a plus. You’ll work closely with data scientists, analysts, and business stakeholders to deliver robust data solutions that drive insights and innovation.
Key Responsibilities
Design, build, and maintain large-scale, distributed data pipelines using Azure Databricks and PySpark. Design, build, and maintain large-scale, distributed data pipelines using Azure Data Factory. Develop and optimize data workflows and ETL processes in Azure Cloud environments. Write clean, maintainable, and efficient code in Python for data engineering tasks. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Monitor and troubleshoot data pipelines for performance and reliability issues. Implement data quality checks and validations, and ensure data lineage and governance. Contribute to the design and implementation of a Data Fabric architecture (desirable).
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 5–10 years of experience in data engineering or related roles. Expertise in Azure Databricks, Delta Lake, and Spark. Strong proficiency in Python, especially in a data processing context. Experience with Azure Data Lake, Azure Data Factory, and related Azure services. Hands-on experience in building data ingestion and transformation pipelines. Familiarity with CI/CD pipelines and version control systems (e.g., Git).
Good To Have
Experience or understanding of Data Fabric concepts (e.g., data virtualization, unified data access, metadata-driven architectures). Knowledge of modern data warehousing and lakehouse principles. Exposure to tools like Apache Airflow, dbt, or similar. Experience working in agile/scrum environments. DP-500 and DP-600 Certifications.
What We Offer
Competitive salary and performance-based bonuses. Flexible work arrangements. Opportunities for continuous learning and career growth. A collaborative, inclusive, and innovative work culture. www.orioninc.com
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy.
This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Min Experience: 4.0 years Max Experience: 8.0 years Skills: Kubernetes, PySpark, Docker, GitLab, dbt, Python, Reliability, Angular 2, Grafana, AWS, Monitoring and Observability Location: Pune
Job description:
Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java and Open Source with a focus on Mobility, Cloud, Data Engineering and Intelligent Automation. Emtec’s singular mission is to create “Clients for Life” - long-term relationships that deliver rapid, meaningful, and lasting business value. At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate and continue to grow and have fun while doing it. You would work with team members who are vibrant, smart and passionate, and they bring their passion to all that they do – whether it’s learning, giving back to our communities or always going the extra mile for our clients.
Position Description
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about the quality of code and who is passionate about providing the best solution to meet client needs, anticipating their future needs based on an understanding of the market. Someone who has worked on Hadoop projects, including processing and data representation using various AWS services.
Must Have Skills:
· 4-8 years of overall experience
· Strong programming experience with Python and the ability to write modular code following best practices in Python, backed by unit tests with a high degree of coverage
· Knowledge of source control (Git/GitLab)
· Understanding of deployment patterns along with knowledge of CI/CD and build tools
· Knowledge of Kubernetes concepts and commands is a must
· Knowledge of monitoring and alerting tools like Grafana and OpenTelemetry is a must
· Knowledge of Astro/Airflow is a plus
· Knowledge of data governance is a plus
· Experience with Cloud providers, preferably AWS
· Experience with PySpark, Snowflake, and dbt is good to have
Professional Skills:
Solid written, verbal, and presentation communication skills
Strong team and individual player
Maintains composure during all types of situations and is collaborative by nature
High standards of professionalism, consistently producing high-quality results
Self-sufficient and independent, requiring very little supervision or intervention
Demonstrate flexibility and openness to bring creative solutions to address issues
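As a hedged illustration of the “modular code backed by unit tests” requirement, below is a minimal Python sketch of a small transformation function with a pytest test; the function and field names are hypothetical, and the test runs with pytest.

def normalise_record(record: dict) -> dict:
    """Trim string fields and coerce the amount to a float."""
    return {
        "id": record["id"],
        "name": record.get("name", "").strip(),
        "amount": float(record.get("amount", 0)),
    }

def test_normalise_record_strips_and_casts():
    raw = {"id": 1, "name": "  acme  ", "amount": "42.5"}
    assert normalise_record(raw) == {"id": 1, "name": "acme", "amount": 42.5}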
Posted 4 days ago
4.0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Title: Data Engineer (4+ Years Experience) Location: Pan India Job Type: Full-Time Experience: 4+ Years Notice Period: Immediate to 30 days preferred
Job Summary
We are looking for a skilled and motivated Data Engineer with 4+ years of experience in building and maintaining scalable data pipelines. The ideal candidate will have strong expertise in AWS Redshift and Python/PySpark, with exposure to AWS Glue, Lambda, and ETL tools being a plus. You will play a key role in designing robust data solutions to support analytical and operational needs across the organization.
Key Responsibilities
Design, develop, and optimize large-scale ETL/ELT data pipelines using PySpark or Python. Implement and manage data models and workflows in AWS Redshift. Work closely with analysts, data scientists, and stakeholders to understand data requirements and deliver reliable solutions. Perform data validation, cleansing, and transformation to ensure high data quality. Build and maintain automation scripts and jobs using Lambda and Glue (if applicable). Ingest, transform, and manage data from various sources into cloud-based data lakes (e.g., S3). Participate in data architecture and platform design discussions. Monitor pipeline performance, troubleshoot issues, and ensure data reliability. Document data workflows, processes, and infrastructure components.
Required Skills
4+ years of hands-on experience as a Data Engineer. Strong proficiency in AWS Redshift including schema design, performance tuning, and SQL development. Expertise in Python and PySpark for data manipulation and pipeline development. Experience working with structured and semi-structured data (JSON, Parquet, etc.). Deep knowledge of data warehouse design principles including star/snowflake schemas and dimensional modeling.
Good To Have
Working knowledge of AWS Glue and building serverless ETL pipelines. Experience with AWS Lambda for lightweight processing and orchestration. Exposure to ETL tools like Informatica, Talend, or Apache NiFi. Familiarity with workflow orchestrators (e.g., Airflow, Step Functions). Knowledge of DevOps practices, version control (Git), and CI/CD pipelines.
Preferred Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field. AWS certifications (e.g., AWS Certified Data Analytics, Developer Associate) are a plus.
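For illustration only, a minimal PySpark sketch of the kind of pipeline step this listing describes: reading semi-structured JSON from a data lake, cleansing it, and writing partitioned Parquet back to S3. The bucket, paths, and column names are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest semi-structured JSON landed in the raw zone of the lake.
raw = spark.read.json("s3://example-data-lake/raw/orders/")

# Basic validation, cleansing, and typing before the data reaches Redshift or analysts.
curated = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet to the curated zone.
(curated.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-data-lake/curated/orders/"))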
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Must have: strong PostgreSQL (Postgres DB) knowledge, including writing procedures and functions, writing dynamic code, performance tuning in PostgreSQL, complex queries, and UNIX. Good to have: IDMC or any other ETL tool knowledge, Airflow DAGs, Python, MS calls.
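As a hedged illustration of the procedures/functions and dynamic SQL skills listed above, here is a minimal Python sketch using psycopg2 to create and call a PL/pgSQL function that builds its query dynamically; the connection string, table, and function names are hypothetical.

import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl password=***")  # placeholder DSN
cur = conn.cursor()

# A PL/pgSQL function using dynamic SQL: count rows in any table passed at call time.
cur.execute("""
    CREATE OR REPLACE FUNCTION row_count(tbl regclass)
    RETURNS bigint
    LANGUAGE plpgsql AS $$
    DECLARE n bigint;
    BEGIN
        EXECUTE format('SELECT count(*) FROM %s', tbl) INTO n;
        RETURN n;
    END $$;
""")

cur.execute("SELECT row_count('public.orders')")
print(cur.fetchone()[0])

conn.commit()
cur.close()
conn.close()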
Posted 4 days ago
0.0 - 15.0 years
83 - 104 Lacs
Delhi, Delhi
On-site
Job Title: Data Architect (Leadership Role) Company: Wingify Location: Delhi (Outstation Candidates Allowed) Experience Required: 10 – 15 years Working Days: 5 days/week Budget: 83 Lakh to 1.04 Cr
About Us
We are a fast-growing product-based tech company known for our flagship product VWO—a widely adopted A/B testing platform used by over 4,000 businesses globally, including Target, Disney, Sears, and Tinkoff Bank. The team is self-organizing, highly creative, and passionate about data, tech, and continuous innovation. Company Size: Mid-Sized Industry: Consumer Internet, Technology, Consulting
Role & Responsibilities
Lead and mentor a team of Data Engineers, ensuring performance and career development. Architect scalable and reliable data infrastructure with high availability. Define and implement data governance frameworks, compliance, and best practices. Collaborate cross-functionally to execute the organization’s data roadmap. Optimize data processing workflows for scalability and cost efficiency. Ensure data quality, privacy, and security across platforms. Drive innovation and technical excellence across the data engineering function.
Ideal Candidate Must-Haves
Experience: 10+ years in software/data engineering roles. At least 2–3+ years in a leadership role managing teams of 5+ Data Engineers. Proven hands-on experience setting up data engineering systems from scratch (0 → 1 stage) in high-growth B2B product companies.
Technical Expertise: Strong in Java (preferred), or Python, Node.js, GoLang. Expertise in big data tools: Apache Spark, Kafka, Hadoop, Hive, Airflow, Presto, HDFS. Strong design experience in High-Level Design (HLD) and Low-Level Design (LLD). Backend frameworks like Spring Boot, Google Guice. Cloud data platforms: AWS, GCP, Azure. Familiarity with data warehousing: Snowflake, Redshift, BigQuery. Databases: Redis, Cassandra, MongoDB, TiDB. DevOps tools: Jenkins, Docker, Kubernetes, Ansible, Chef, Grafana, ELK.
Other Skills: Strong understanding of data governance, security, and compliance (GDPR, SOC2, etc.). Proven strategic thinking with the ability to align technical architecture to business objectives. Excellent communication, leadership, and stakeholder management.
Preferred Qualifications
Exposure to Machine Learning infrastructure / MLOps. Experience with real-time data analytics. Strong foundation in algorithms, data structures, and scalable systems. Previous work in SaaS or high-growth startups.
Screening Questions
Do you have team leadership experience? How many engineers have you led? Have you built a data engineering platform from scratch? Describe the setup. What’s the largest data scale you’ve worked with and where? Are you open to continuing hands-on coding in this role?
Interested candidates can apply via deepak.visko@gmail.com or 9238142824. Job Types: Full-time, Permanent Pay: ₹8,300,000.00 - ₹10,400,000.00 per year Work Location: In person
Posted 4 days ago
7.0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Experience : 7.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Chennai) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Agile, Program Management, data infrastructure Forbes Advisor is Looking for: Program Manager – Data Job Description Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organisation builds modern, AI-augmented pipelines that turn information into revenue-driving insight. Job Description: We’re hiring a Program Manager to orchestrate complex, cross-functional data initiatives—from revenue-pipeline automation to analytics product launches. You’ll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we’d love to meet you. Key Responsibilities: Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs). Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments. Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics. Create transparent status reporting—risks, dependencies, OKRs—tailored for engineers up to C-suite stakeholders. Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors. Champion process optimisation: intake, prioritisation, change management, and post-mortems. Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making. Facilitate launch readiness—QA checklists, enablement materials, go-live runbooks—so new data products land smoothly. Foster a culture of documentation, psychological safety, and continuous improvement within the data organisation. Experience required: 7+ years program or project-management experience in data, analytics, SaaS, or high-growth tech. Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines. Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack). Strong understanding of data & cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer). Excellent written and verbal communication—able to translate between technical teams and business leaders. Risk-management mindset: identify, quantify, and drive mitigation before issues escalate. Experience coordinating across time zones and cultures in a remote-first environment. Nice to Have Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent). Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4). Exposure to revenue operations, performance marketing, or subscription/affiliate business models. Background in change-management or process-improvement methodologies (Lean, Six Sigma). Perks: Monthly long weekends—every third Friday off. Fitness and commute reimbursement. Remote-first culture with flexible hours and a high-trust environment. 
Opportunity to shape a world-class data platform inside a trusted global brand. Collaborate with talented engineers, analysts, and product leaders who value innovation and impact. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Experience : 7.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Hybrid (Chennai) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Forbes Advisor) What do you need for this opportunity? Must have skills required: Agile, Program Management, data infrastructure Forbes Advisor is Looking for: Program Manager – Data Job Description Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organisation builds modern, AI-augmented pipelines that turn information into revenue-driving insight. Job Description: We’re hiring a Program Manager to orchestrate complex, cross-functional data initiatives—from revenue-pipeline automation to analytics product launches. You’ll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we’d love to meet you. Key Responsibilities: Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs). Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments. Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics. Create transparent status reporting—risks, dependencies, OKRs—tailored for engineers up to C-suite stakeholders. Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors. Champion process optimisation: intake, prioritisation, change management, and post-mortems. Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making. Facilitate launch readiness—QA checklists, enablement materials, go-live runbooks—so new data products land smoothly. Foster a culture of documentation, psychological safety, and continuous improvement within the data organisation. Experience required: 7+ years program or project-management experience in data, analytics, SaaS, or high-growth tech. Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines. Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack). Strong understanding of data & cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer). Excellent written and verbal communication—able to translate between technical teams and business leaders. Risk-management mindset: identify, quantify, and drive mitigation before issues escalate. Experience coordinating across time zones and cultures in a remote-first environment. Nice to Have Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent). Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4). Exposure to revenue operations, performance marketing, or subscription/affiliate business models. Background in change-management or process-improvement methodologies (Lean, Six Sigma). Perks: Monthly long weekends—every third Friday off. Fitness and commute reimbursement. Remote-first culture with flexible hours and a high-trust environment. 
Opportunity to shape a world-class data platform inside a trusted global brand. Collaborate with talented engineers, analysts, and product leaders who value innovation and impact. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 4 days ago