4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview
We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization—ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration.

Responsibilities
- Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines.
- LLM Expertise: Build and fine-tune GPT, Claude, Gemini, LLaMA 2/3, and Mistral models; deep knowledge of RLHF, transformer internals, and multi-modal integration.
- RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector).
- Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, and AWS SageMaker/Bedrock integration.
- Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards.
- LLMOps/MLOps: Deploy CI/CD pipelines; automate tuning and rollout; handle drift, rollback, and incidents with KPI dashboards.
- Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling.
- Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, and LGPL patterns.
- Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, and HIL (human-in-the-loop) review to track hallucination, toxicity, latency, and sustainability.

Skills Required
- 4–5 years in AI/ML (2+ in GenAI)
- Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC
- Proficiency with MLflow, Kubeflow, Airflow, Prompt flow
- Experience with LLMs, vector DBs, prompt engineering, MLOps
- Solid foundation in applied mathematics and statistics

Nice to Have
- Open-source contributions, AI publications
- Hands-on experience with cloud-native GenAI deployment
- Deep interest in ethical AI and AI safety

2 Days WFO Mandatory

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
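As context for the RAG responsibilities above: at its core, retrieval ranks documents by embedding similarity. A minimal, illustrative sketch in plain Python follows; the `embed` function here is a hypothetical placeholder, not a real model, and a production system would use a vector DB (Pinecone, FAISS, pgvector) instead of a brute-force scan.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# `embed` is a hypothetical stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder only: deterministic pseudo-embedding per input string.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scored = []
    for d in docs:
        v = embed(d)
        # Cosine similarity between query and document embeddings.
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((sim, d))
    scored.sort(reverse=True)
    return [d for _, d in scored[:k]]

docs = ["Refund policy ...", "Shipping times ...", "Warranty terms ..."]
print(top_k("How do I get a refund?", docs, k=2))
```

The retrieved passages would then be injected into the prompt sent to the LLM; hybrid retrieval adds a keyword-based score alongside the cosine score.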
Posted 1 week ago
10.0 years
15 - 20 Lacs
Jaipur, Rajasthan, India
On-site
We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features, train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale.

🎯 Key Responsibilities

🧩 Product Management
- Define product vision, roadmap, and AI use cases aligned with business goals.
- Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features.
- Translate ambiguous problem statements into clear, prioritized product requirements.

⚙️ AI/ML Engineering & Model Development
- Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.).
- Build pipelines for data preprocessing, model training, evaluation, and deployment.
- Implement scalable ML solutions using frameworks like PyTorch, TensorFlow, Hugging Face, LangChain, etc.
- Contribute to R&D for cutting-edge GenAI models (text, vision, code, multimodal).

🤖 AI Agents & LLM Tooling
- Design and implement autonomous or semi-autonomous AI agents using tools like AutoGen, LangGraph, CrewAI, etc.
- Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG).
- Continuously monitor, test, and improve LLM behavior, safety, and output quality.

📊 Data Science & Analytics
- Explore and analyze large datasets to generate insights and inform model development.
- Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis.
- Work with structured, unstructured, and multimodal data (text, audio, image, etc.).

🧰 Preferred Tech Stack / Tools
- Languages: Python, SQL; optionally Rust or TypeScript
- Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI
- Platforms: AWS, Azure, GCP, Vertex AI, SageMaker
- MLOps: MLflow, Weights & Biases, DVC, Kubeflow
- Data: Pandas, NumPy, Spark, Airflow, Databricks
- Vector DBs: Pinecone, Weaviate, FAISS
- Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral
- Tools: Git, Docker, Kubernetes, REST, GraphQL

🧑‍💼 Qualifications
- Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field.
- 10+ years of experience in core ML, AI, or Data Science roles.
- Proven experience building and shipping AI/ML products.
- Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation.
- Strong product thinking and ability to work closely with both technical and non-technical stakeholders.
- Familiarity with GenAI safety, explainability, hallucination reduction, prompt testing, and computer vision.

🌟 Bonus Skills
- Experience with autonomous agents and multi-agent orchestration.
- Open-source contributions to ML/AI projects.
- Prior startup or high-growth tech company experience.
- Knowledge of reinforcement learning, diffusion models, or multimodal AI.
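As a flavor of the LLM tooling this role references, here is a minimal Hugging Face Transformers snippet; the model and prompt are illustrative only, not a statement about this team's stack.

```python
# Tiny text-generation example with the Hugging Face `pipeline` API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small demo model
out = generator("Data-driven products are", max_new_tokens=20)
print(out[0]["generated_text"])
```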
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Immediate #HIRING for a highly motivated and experienced GCP Data Engineer to join our growing team. We're a leading software company specializing in Artificial Intelligence, Machine Learning, Data Analytics, innovative data solutions, and cloud-based technologies. If you're passionate about building robust applications and thrive in a dynamic environment, please share your resume at rizwana@randomtrees.com.

Job Title: GCP Data Engineer
Experience: 4–8 years
Notice: Immediate
Location: Hyderabad / Chennai - Hybrid Mode
Job Type: Full-time Employment

Job Description:
We are looking for an experienced GCP Data Engineer to design, develop, and optimize data pipelines and solutions on Google Cloud Platform (GCP). The ideal candidate should have hands-on experience with BigQuery, DataFlow, PySpark, GCS, and Airflow (Cloud Composer), along with strong expertise or knowledge in DBT.

Key Responsibilities:
- Design and develop scalable ETL/ELT data pipelines using DataFlow (Apache Beam), PySpark, and Airflow (Cloud Composer), as sketched below.
- Work extensively with BigQuery for data transformation, storage, and analytics.
- Implement data ingestion, processing, and transformation workflows using GCP-native services.
- Optimize and troubleshoot performance issues in BigQuery and DataFlow pipelines.
- Manage data storage and governance using Google Cloud Storage (GCS) and other GCP services.
- Ensure data quality, security, and compliance with industry standards.
- Work closely with data scientists, analysts, and business teams to provide data solutions.
- Automate workflows, monitor jobs, and improve pipeline efficiency.

Required Skills:
✔ Google Cloud Platform (GCP) Data Engineering (GCP DE Certification preferred); DBT knowledge or experience is mandatory
✔ BigQuery – Data modeling, query optimization, and performance tuning
✔ PySpark – Data processing and transformation
✔ GCS (Google Cloud Storage) – Data storage and management
✔ Airflow / Cloud Composer – Workflow orchestration and scheduling
✔ SQL & Python – Strong hands-on experience
✔ Experience with CI/CD pipelines, Terraform, or Infrastructure as Code (IaC) is a plus.
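A compact sketch of the pipeline pattern this posting describes: a Cloud Composer (Airflow) DAG running a scheduled BigQuery transformation. Project, dataset, table, and SQL are all placeholder names.

```python
# Illustrative Airflow 2.x DAG; the BigQuery job config mirrors the
# BigQuery API's `query` job shape. All identifiers are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    transform = BigQueryInsertJobOperator(
        task_id="transform_sales",
        configuration={
            "query": {
                "query": (
                    "SELECT order_id, SUM(amount) AS total "
                    "FROM `my-project.raw.sales` GROUP BY order_id"
                ),
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "marts",
                    "tableId": "sales_daily",
                },
                "writeDisposition": "WRITE_TRUNCATE",  # replace table each run
            }
        },
    )
```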
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Noida, Pune, Gurugram
Hybrid
IRIS Software, a prominent IT company, is looking for a Senior AWS Data Engineer. Please find the job description below and share your updated resume at Prateek.gautam@irissoftware.com.

Role: Senior AWS Data Engineer
Location: Pune / Noida / Gurgaon
Hybrid: 3 days office, 2 days work from home

Job Description:
- 6 to 10 years of overall experience.
- Good experience in data engineering is required.
- Good experience in AWS, SQL, AWS Glue, PySpark, Airflow, CDK, and Redshift is required.
- Good communication skills are required.

About Iris Software Inc.
With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
Posted 1 week ago
5.0 - 10.0 years
25 - 40 Lacs
Gurugram
Work from Office
Job Title: Data Engineer
Job Type: Full-time
Department: Data Engineering / Data Science
Reports To: Data Engineering Manager / Chief Data Officer

About the Role:
We are looking for a talented Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining robust data pipelines and systems that process and store large volumes of data. You will collaborate closely with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions. This role requires a strong background in data engineering, database technologies, and cloud platforms, along with the ability to work in an Agile environment to drive data initiatives forward.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines that move, transform, and store large datasets.
- Develop and optimize ETL processes using tools such as Apache Spark, Apache Kafka, or AWS Glue.
- Work with SQL and NoSQL databases to ensure the availability, consistency, and reliability of data.
- Collaborate with data scientists and analysts to ensure data requirements and quality standards are met.
- Design and implement data models, schemas, and architectures for data lakes and data warehouses.
- Automate manual data processes to improve efficiency and data processing speed.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Continuously evaluate and integrate new tools and technologies to enhance data engineering processes.
- Troubleshoot and resolve data quality and performance issues.
- Participate in code reviews and contribute to a culture of best practices in data engineering.

Requirements:
- 3-10 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with big data technologies such as Apache Hadoop, Spark, Hive, and Kafka.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Proficiency in Python, Java, or Scala for data processing and scripting.
- Familiarity with data warehousing concepts, tools, and technologies (e.g., Snowflake, Redshift, BigQuery).
- Experience working with data modeling, data lakes, and data pipelines.
- Solid understanding of data governance, data privacy, and security best practices.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile development environment.
- Excellent communication skills and the ability to work cross-functionally.
Posted 1 week ago
175.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
- Expertise with handling large volumes of data coming from many different disparate systems
- Expertise with core Java, multithreading, backend processing, and transforming large data volumes
- Working knowledge of Apache Flink, Apache Airflow, Apache Beam, and other open-source data processing platforms
- Working knowledge of cloud platforms like GCP
- Working knowledge of databases and performance tuning for complex big data scenarios - SingleStore DB and in-memory processing
- Cloud deployments, CI/CD, and platform resiliency
- Good experience with MVEL
- Excellent communication skills, a collaboration mindset, and the ability to work through unknowns
- Work with key stakeholders to drive data solutions that align to strategic roadmaps, prioritized initiatives and strategic technology directions.
- Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management and cost effectiveness.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Computer Science Engineering, or a related field is required.
- 3+ years of large-scale technology engineering and formal management in a complex environment and/or comparable experience.
- To be successful in this role you will need to be good in Java, Flink, SQL, Kafka & GCP.
- Successful engineering and deployment of enterprise-grade technology products in an Agile environment.
- Large-scale software product engineering experience with contemporary tools and delivery methods (i.e. DevOps, CI/CD, Agile, etc.).
- 3+ years' hands-on engineering experience in Java and the data/distributed eco-system.
- Ability to see the big picture with attention given to critical details.

Preferred Qualifications:
- Knowledge of Kafka and Spark
- Finance domain knowledge

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial-well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
6.0 - 8.0 years
10 - 15 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Application Integration Engineer
Experience Level: 6-8 years
Skills: Python, AWS S3, AWS MWAA (Airflow), Confluent Kafka, API Development

- Experienced Python developer with very good experience with Confluent Kafka and Airflow.
- API development experience using Python.
- Good experience with AWS cloud services.
- Very good experience with DevOps processes and CI/CD tools like Git, Jenkins, AWS ECR/ECS, AWS EKS, etc.
- Performs requirements analysis of FRs/NFRs and prepares technical designs based on requirements.
- Builds code based on the technical design.
- Can independently resolve technical issues and also help other team members with technical issue resolution.
- Helps with testing and efficiently fixes bugs.
- Follows the DevOps CI/CD processes and change management processes for any code deployment.
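To illustrate the Confluent Kafka piece of this stack, here is a minimal Python consumer using the `confluent-kafka` client; broker address, topic, and group id are placeholders.

```python
# Minimal Confluent Kafka consumer loop.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "demo-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] {msg.value().decode('utf-8')}")
finally:
    consumer.close()
```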
Posted 1 week ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth

The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world.

Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage the operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into insights through compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed.
- Manage ad-hoc projects and events organization.

Qualifications
- Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and ability to work across technical and non-technical teams.

Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques (see the sketch after this description)
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
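As a taste of the data wrangling skills listed above, here is a small pandas sketch of typical cleaning and normalization steps; the columns and values are made up for illustration.

```python
# Typical cleaning steps: normalize text, coerce numerics, drop bad rows.
import pandas as pd

df = pd.DataFrame({
    "region": [" North ", "south", None, "East"],
    "income": ["1,200", "950", "n/a", "2,000"],
})

df["region"] = df["region"].str.strip().str.title()  # tidy labels
df["income"] = pd.to_numeric(
    df["income"].str.replace(",", "", regex=False), errors="coerce"
)  # "n/a" becomes NaN instead of raising
df = df.dropna(subset=["region"])  # drop rows missing a key field
print(df)
```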
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Senior Software Engineer

Overview
We are the global technology company behind the world’s fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

Our Team Within Mastercard – Data & Services
The Data & Services team is a key differentiator for Mastercard, providing the cutting-edge services that are used by some of the world's largest organizations to make multi-million dollar decisions and grow their businesses. Focused on thinking big and scaling fast around the globe, this agile team is responsible for end-to-end solutions for a diverse global customer base. Centered on data-driven technologies and innovation, these services include payments-focused consulting, loyalty and marketing programs, business Test & Learn experimentation, and data-driven information and risk management services.

Data Analytics and AI Solutions (DAAI) Program
Within the D&S Technology Team, the DAAI program is a relatively new program that is comprised of a rich set of products that provide accurate perspectives on Portfolio Optimization and Ad Insights. Currently, we are enhancing our customer experience with new user interfaces, moving to API and web application-based data publishing to allow for seamless integration in other Mastercard products and externally, utilizing new data sets and algorithms to further analytic capabilities, and generating scalable big data processes.

We are looking for an innovative software engineering lead who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities (think prediction as a service, matching as a service or forecasting as a service) that also includes a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.

Engineers work in small, flexible teams. Every team member contributes to designing, building, and testing features. The range of work you will encounter varies from building intuitive, responsive UIs to designing backend data models, architecting data flows, and beyond. There are no rigid organizational structures, and each team uses processes that work best for its members and projects. Here are a few examples of products in our space:
- Portfolio Optimizer (PO) is a solution that leverages Mastercard’s data assets and analytics to allow issuers to identify and increase revenue opportunities within their credit and debit portfolios.
- Ad Insights uses anonymized and aggregated transaction insights to offer targeting segments that have a high likelihood to make purchases within a category, allowing for more effective campaign planning and activation.

Help found a new, fast-growing engineering team!

Position Responsibilities
As a Senior Software Engineer, you will:
- Play a large role in scoping, design and implementation of complex features
- Push the boundaries of analytics and powerful, scalable applications
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics
- Build and maintain analytics and data models to enable performant and scalable products
- Ensure a high-quality code base by writing and reviewing performant, well-tested code
- Mentor junior software engineers and teammates
- Drive innovative improvements to team development processes
- Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases and apply that knowledge to scoping and building new modules and features
- Collaborate across teams with exceptional peers who are passionate about what they do

Ideal Candidate Qualifications
- 5+ years of full stack engineering experience in an agile production environment
- Experience leading the design and implementation of large, complex features in full-stack applications
- Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
- Experience coaching and mentoring junior teammates
- Experience leading a large technical effort that spans multiple people and teams
- Proficiency with Java/Spring Boot, .NET/C#, SQL Server or other object-oriented languages, front-end frameworks, and/or relational database technologies
- Some proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
- Some experience in building and deploying production-level data-driven applications and data processing workflows/pipelines and/or implementing machine learning systems at scale in Java, Scala, or Python
- Strong technologist with a proven track record of learning new technologies and frameworks
- Customer-centric development approach
- Passion for analytical / quantitative problem solving
- Experience identifying and implementing technical improvements to development processes
- Collaboration skills with experience working with people across roles and geographies
- Motivation, creativity, self-direction, and desire to thrive on small project teams
- Superior academic record with a degree in Computer Science or related technical field
- Strong written and verbal English communication skills

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Incedo is hiring Data Engineers (GCP): Immediate to 30-day joiners preferred! 🚀 Are you passionate about GCP data engineering and looking for an exciting opportunity to work on cutting-edge projects? We're looking for a GCP Data Engineer to join our team in Chennai and Hyderabad!

Skills Required:
- Experience: 3 to 5 years
- Experience with GCP, Python, Airflow, and PySpark

Location: Chennai/Hyderabad (WFO)

If you are interested, please drop your resume at anshika.arora@incedoinc.com
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Incedo is hiring Data Engineers (GCP): Immediate to 30-day joiners preferred! 🚀 Are you passionate about GCP data engineering and looking for an exciting opportunity to work on cutting-edge projects? We're looking for a GCP Data Engineer to join our team in Hyderabad! Walk-in drive at Hyderabad on 2nd Aug. Kindly contact me by email for more details and to get an invite.

Skills Required:
- Experience: 3 to 5 years
- Experience with GCP, Python, Airflow, and PySpark

Location: Chennai/Hyderabad (WFO)

If you are interested, please drop your resume at anshika.arora@incedoinc.com

#walkindrivehyderabad #walkin #gcpdataengineer #gcp #dataengineer
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
- 4-6 years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and SparkSQL), Hadoop, and Hive (see the sketch after this description)
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis and research skills
- Demonstrable ability to think outside of the box and not be dependent on readily available tools
- Excellent communication, presentation and interpersonal skills are a must

Good to have:
- Hands-on experience with cloud-platform-provided Big Data technologies (i.e. IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow and any job scheduler experience
- Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations

Roles & Responsibilities
- Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
- Perform integration testing of the created pipelines in the AWS environment.
- Provide estimates for development, testing & deployments across different environments.
- Participate in code peer reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with required AWS services, i.e. S3, IAM, Glue, EMR, Redshift, etc.
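A minimal PySpark sketch showing the two styles the first bullet names, the DataFrame API and SparkSQL, computing the same aggregate; the data is made up.

```python
# Same aggregation expressed via the DataFrame API and via SparkSQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()
df = spark.createDataFrame(
    [("2024-01-01", "A", 10.0), ("2024-01-01", "B", 5.0), ("2024-01-02", "A", 7.5)],
    ["dt", "sku", "amount"],
)

daily = df.groupBy("dt").agg(F.sum("amount").alias("total"))  # DataFrame API

df.createOrReplaceTempView("sales")
daily_sql = spark.sql("SELECT dt, SUM(amount) AS total FROM sales GROUP BY dt")

daily.show()      # both produce the same result
daily_sql.show()
```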
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GCP Data Engineer
Location: Chennai 34350
Type: Contract
Budget: Up to ₹18 LPA
Notice Period: Immediate joiners preferred

🧾 Job Description
We are seeking an experienced Google Cloud Platform (GCP) Data Engineer to join our team in Chennai. This role is centered on designing and building cloud-based data solutions that support AI/ML, analytics, and business intelligence use cases. You will develop scalable and high-performance pipelines, integrate and transform data from various sources, and support both real-time and batch data needs.

🛠️ Key Responsibilities
- Design and implement scalable batch and real-time data pipelines using GCP services such as BigQuery, Dataflow, Dataform, Cloud Composer (Airflow), Data Fusion, Dataproc, Cloud SQL, Compute Engine, and others.
- Build data products that combine historical and live data for business insights and analytical applications.
- Lead efforts in data transformation, ingestion, integration, data mart creation, and activation of data assets.
- Collaborate with cross-functional teams including AI/ML, analytics, DevOps, and product teams to deliver robust cloud-native solutions.
- Optimize pipelines for performance, reliability, and cost-effectiveness.
- Contribute to data governance, quality assurance, and security best practices.
- Drive innovation by integrating AI/ML features, maintaining strong documentation, and applying continuous improvement strategies.
- Provide production support, troubleshoot failures, and meet SLAs using GCP’s monitoring tools.
- Work within an Agile environment, follow CI/CD practices, and apply test-driven development (TDD).

✅ Skills Required
- Strong experience in: BigQuery, Dataflow, Dataform, Data Fusion, Cloud SQL, Compute Engine, Dataproc, Airflow (Cloud Composer), Cloud Functions, Cloud Run
- Programming experience with Python, Java, PySpark, or Apache Beam
- Proficient in SQL (5+ years) for complex data handling
- Hands-on with Terraform, Tekton, Cloud Build, GitHub, Docker
- Familiarity with Apache Kafka, Pub/Sub, Kubernetes
- GCP Certified (Associate or Professional Data Engineer)

⭐ Skills Preferred
- Deep knowledge of cloud architecture and infrastructure-as-code tools
- Experience in data security, regulatory compliance, and data governance
- Experience with AI/ML solutions or platforms
- Understanding of DevOps pipelines, CI/CD using Cloud Build, and containerization
- Exposure to financial services data or similar regulated environments
- Experience in mentoring and leading engineering teams
- Tools: JIRA, Artifact Registry, App Engine

🎓 Education
Required: Bachelor's Degree (in Computer Science, Engineering, or a related field)
Preferred: Master’s Degree

📌 Additional Details
Role Type: Contract-based
Work Location: Chennai, Onsite
Target Candidates: Mid to Senior level with a minimum of 5+ years of data engineering experience
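Since Dataflow pipelines are written with Apache Beam, here is a minimal Beam wordcount-style sketch that runs locally with the DirectRunner; pointing it at Dataflow would only require runner and GCP options.

```python
# Tiny Apache Beam pipeline (DirectRunner by default).
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["alpha", "beta", "alpha"])
        | "Pair" >> beam.Map(lambda w: (w, 1))      # (word, 1)
        | "Count" >> beam.CombinePerKey(sum)        # sum counts per word
        | "Print" >> beam.Map(print)
    )
```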
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Engineering Lead, you will collaborate with marketing, analytics, and business teams to understand data requirements and develop data solutions that address critical business inquiries. Your responsibilities will include leading the implementation and strategic optimization of tag management solutions such as Tealium and Google Tag Manager (GTM) to ensure precise and comprehensive data capture. You will leverage your expertise in Google Analytics 4 (GA4) to configure and customize data collection processes for enhanced insights. Additionally, you will architect scalable and performant data models on Google Cloud, utilizing BigQuery for data warehousing and analysis purposes.

In this role, you will proficiently use SQL and scripting languages like JavaScript and HTML for data extraction, manipulation, and visualization. You will also play a pivotal role in mentoring and guiding a team of engineers, fostering a culture of collaboration and continuous improvement. Staying updated on the latest trends and technologies in data engineering and analytics, you will bring innovative ideas to the table and drive the deliverables by mentoring team members effectively.

To qualify for this position, you must have experience with Tealium and tag management tools, along with a proven ability to use communication effectively to build positive relationships and drive project success. Your expertise in tag management solutions such as Tealium and GTM will be crucial for comprehensive website and app data tracking, including the implementation of scripting languages for tag extensions. Proficiency in Tealium concepts like IQ Tag Management, AudienceStream, EventStream API Hub, Customer Data Hub, and debugging tools is essential. Experience in utilizing Google Analytics 4 (GA4) for advanced data collection and analysis, as well as knowledge of Google Cloud, particularly Google BigQuery for data warehousing and analysis, will be advantageous.

Preferred qualifications for this role include experience in a similar industry (e.g., retail, e-commerce, digital marketing), proficiency with Python/PySpark for data processing and analysis, working knowledge of Snowflake for data warehousing, experience with Airflow or similar workflow orchestration tools for managing data pipelines, and familiarity with AWS cloud technology. Additionally, skills in frontend technologies like React, JavaScript, and HTML, coupled with Python expertise for backend development, will be beneficial.

Overall, as a Data Engineering Lead, you will play a critical role in designing robust data pipelines and architectures that support data-driven decision-making for websites and mobile applications, ensuring seamless data orchestration and processing through best-in-class ETL tools and technologies. Your expertise in Tealium, Google Analytics 4, and SQL will be instrumental in driving the success of data engineering initiatives within the organization.
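For reference on the GA4-to-BigQuery workflow mentioned above, here is a minimal query against a GA4 export table using the official BigQuery Python client; the project and dataset names are placeholders following the standard `events_YYYYMMDD` export naming.

```python
# Count GA4 events by name for one exported day (placeholder identifiers).
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials
query = """
    SELECT event_name, COUNT(*) AS events
    FROM `my-project.analytics_123456.events_20240101`
    GROUP BY event_name
    ORDER BY events DESC
"""
for row in client.query(query).result():
    print(row.event_name, row.events)
```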
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Develop comprehensive digital analytics solutions utilizing Adobe Analytics for web tracking, measurement, and insight generation
- Design, manage, and optimize interactive dashboards and reports using Power BI to support business decision-making
- Lead the design, development, and maintenance of robust ETL/ELT pipelines integrating diverse data sources
- Architect scalable data solutions leveraging Python for automation, scripting, and engineering tasks
- Oversee workflow orchestration using Apache Airflow to ensure timely and reliable data processing
- Provide leadership and develop robust forecasting models to support sales and marketing strategies (a toy baseline is sketched after this description)
- Develop advanced SQL queries for data extraction, manipulation, analysis, and database management
- Implement best practices in data modeling and transformation using Snowflake and DBT; exposure to Cosmos DB is a plus
- Ensure code quality through version control best practices using GitHub
- Collaborate with cross-functional teams to understand business requirements and translate them into actionable analytics solutions
- Stay updated with the latest trends in digital analytics; familiarity or hands-on experience with Adobe Experience Platform (AEP) / Customer Journey Analytics (CJA) is highly desirable
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Master’s or Bachelor’s degree in Computer Science, Information Systems, Engineering, Mathematics, Statistics, Business Analytics, or a related field
- 8+ years of progressive experience in digital analytics, data analytics or business intelligence roles
- Experience with data modeling and transformation using tools such as DBT and Snowflake; familiarity with Cosmos DB is a plus
- Experience developing forecasting models and conducting predictive analytics to drive business strategy
- Advanced proficiency in web and digital analytics platforms (Adobe Analytics)
- Proficiency in ETL/ELT pipeline development and workflow orchestration (Apache Airflow)
- Skilled in creating interactive dashboards and reports using Power BI or similar BI tools
- Deep understanding of digital marketing metrics, KPIs, attribution models, and customer journey analysis
- Industry certifications relevant to digital analytics or cloud data platforms
- Ability to deliver clear digital reporting and actionable insights to stakeholders at all organizational levels

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #NJP
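As a toy illustration of the forecasting responsibility flagged earlier, here is a 7-day moving-average baseline in pandas, a deliberately simple stand-in for the models this role would actually build.

```python
# Naive baseline: forecast tomorrow as the trailing 7-day average.
import pandas as pd

s = pd.Series(
    [120, 130, 125, 140, 150, 145, 160, 170],
    index=pd.date_range("2024-01-01", periods=8, freq="D"),
)
baseline = s.rolling(window=7).mean()
print(round(baseline.iloc[-1], 1))  # ~145.7 for this sample data
```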
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
About the Team
As a part of the DoorDash organization, you will be joining a data-driven team that values timely, accurate, and reliable data to make informed business and product decisions. Data serves as the foundation of DoorDash's success, and the Data Engineering team is responsible for building database solutions tailored to various use cases such as reporting, product analytics, marketing optimization, and financial reporting. By implementing robust data structures and data warehouse architecture, this team plays a crucial role in facilitating decision-making processes at DoorDash. Additionally, the team focuses on enhancing the developer experience by developing tools that support the organization's high-velocity demands.

About the Role
DoorDash is seeking a dedicated Data Engineering Manager to lead the development of enterprise-scale data solutions. In this role, you will serve as a technical expert on all aspects of data architecture, empowering data engineers, data scientists, and DoorDash partners. Your responsibilities will include fostering a culture of engineering excellence, enabling engineers to deliver reliable and flexible solutions at scale. Furthermore, you will be instrumental in building and nurturing a high-performing team, driving innovation and success in a dynamic and fast-paced environment.

In this role, you will:
- Lead and manage a team of data engineers, focusing on hiring, building, growing, and nurturing impactful business-focused data teams.
- Drive the technical and strategic vision for embedded pods and foundational enablers to meet current and future scalability and interoperability needs.
- Strive for continuous improvement of data architecture and development processes.
- Balance quick wins with long-term strategy and engineering excellence, breaking down large systems into user-friendly data assets and reusable components.
- Collaborate cross-functionally with stakeholders, external partners, and peer data leaders.
- Utilize effective planning and execution tools to ensure short-term and long-term team and stakeholder success.
- Prioritize reliability and quality as essential components of data solutions.

Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science or an equivalent field.
- Over 10 years of experience in data engineering, data platform, or related domains.
- Minimum of 2 years of hands-on management experience.
- Strong communication and leadership skills, with a track record of hiring and growing teams in a fast-paced environment.
- Proficiency in programming languages such as Python, Kotlin, and SQL.
- Prior experience with technologies like Snowflake, Databricks, Spark, Trino, and Pinot.
- Familiarity with the AWS ecosystem and large-scale batch/real-time ETL orchestration using tools like Airflow, Kafka, and Spark Streaming.
- Knowledge of data lake file formats including Delta Lake, Apache Iceberg, Glue Catalog, and S3.
- Proficiency in system design and experience with AI solutions in the data space.

At DoorDash, we are dedicated to fostering a diverse and inclusive community within our company and beyond. We believe that innovation thrives in an environment where individuals from diverse backgrounds, experiences, and perspectives come together. We are committed to providing equal opportunities for all and creating an inclusive workplace where everyone can excel and contribute to our collective success.
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Gurgaon
Office Address: Floor 22, Tower C, Epitome Building No. 5, DLF Cyber City, DLF Phase 2, Gurgaon - 122002, Haryana, India

TBO – Travel Boutique Online Group (www.tbo.com)
TBO is a global platform that aims to simplify all buying and selling travel needs of travel partners across the world. The proprietary technology platform aims to simplify the demands of the complex world of global travel by seamlessly connecting the highly distributed travel buyers and travel suppliers at scale. The TBO journey began in 2006 with a simple goal – to address the evolving needs of travel buyers and suppliers, and what started off as a single product air ticketing company has today become the leading B2A (Business to Agents) travel portal across the Americas, UK & Europe, Africa, Middle East, India, and Asia Pacific. Today, TBO's products range across air, hotels, rail, holiday packages, car rentals, transfers, sightseeing, cruise, and cargo. Apart from these products, our proprietary platform relies heavily on AI/ML to offer unique listings and products, meeting specific requirements put forth by customers, thus increasing conversions. TBO's approach has always been technology-first and we continue to invest in new innovations and new offerings to make travel easy and simple. TBO's travel APIs are serving large travel ecosystems across the world while the modular architecture of the platform enables new travel products while expanding across new geographies.

Why TBO:
- You will influence & contribute to "Building the World's Largest Technology-Led Travel Distribution Network" for a $9 trillion global travel business market.
- We are the emerging leaders in technology-led end-to-end travel management, in the B2B space.
- Physical presence in 47 countries with business in 110 countries.
- We are reputed for our long-lasting trusted relationships. We stand by our ecosystem of suppliers and buyers to service the end customer.
- An open & informal start-up environment which cares.

What TBO offers to a Life Traveller in You:
- Enhance your leadership acumen. Join the journey to create global scale and 'World Best'.
- Challenge yourself to do something path-breaking. Be empowered. The only thing to stop you will be your imagination.
- As we enter the last phase of the pandemic, the travel space is likely to see significant growth. Witness and shape this space. It will be one exciting journey.

We are a tech-driven organization focused on leveraging data, AI, and scalable cloud infrastructure to drive impactful business decisions. We are looking for a highly skilled and experienced Head of Data Science and Engineering with a strong background in machine learning, AI, and big data architecture, ideally from a top-tier engineering institute.

Key Responsibilities:
- Design, develop, and maintain robust, scalable, and high-performance data pipelines and ETL processes.
- Architect and implement large-scale data infrastructure using tools such as Spark, Kafka, Airflow, and cloud platforms (AWS/GCP/Azure).
- Deploy machine learning models into production.
- Optimize data workflows to handle structured and unstructured data across various sources.
- Develop and maintain metadata management, data quality checks, and observability.
- Drive best practices in data engineering, data governance, and model monitoring.
- Mentor junior team members and contribute to strategic technology decisions.

Must-Have Qualifications:
- 10+ years of experience in data engineering/science, data architecture, or a related domain.
- Strong expertise in Python/Scala/Java and SQL.
- Proven experience with big data tools (Spark, Hadoop, Hive), streaming systems (Kafka, Flink), and workflow orchestration tools (Airflow, Prefect).
- Deep understanding of data modeling, data warehousing, and distributed systems.
- Strong exposure to ML/AI pipelines, MLOps, and model lifecycle management.
- Experience with cloud platforms such as AWS (S3, Redshift, Glue), GCP (BigQuery, Dataflow), or Azure (Data Lake, Synapse).
- Graduate/Postgraduate from a premium engineering institute (IITs, NITs, BITS, etc.).
- Exposure to statistical modeling around pricing and churn management is a plus.
- Exposure to fine-tuning LLMs is a plus.
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
Job Title: Data Engineer
Location: Remote (4-hour EST overlap, 9 AM – 1 PM EST)
Type: Full-time

We are seeking experienced Data Engineers to join our engineering team and build data-driven products on cloud-based infrastructure. You will design, develop, and optimize scalable systems to process large volumes of data while leading a small team of developers.

Key Responsibilities:
- Design, build, test, and deploy scalable, reusable data systems.
- Manage and optimize data and compute environments for efficiency.
- Lead and mentor a small development team; conduct code reviews.
- Collaborate with cross-functional teams to integrate data solutions.
- Stay current with emerging technologies and guide best practices.

Required Skills:
- 8+ years' experience with Linux, Bash, Python, SQL.
- 4+ years with Spark and the Hadoop ecosystem.
- 4+ years using AWS (EMR, Glue, Athena, Redshift).
- Experience designing/managing data flows and APIs.
- 4+ years in team management and mentorship.
- Passion for solving complex, real-world data problems.

Preferred Skills:
- Degree in Computer Science, Engineering, or equivalent.
- Experience with Python, C++, large-scale data infrastructure, Hive, Airflow, dbt, Airbyte.
- Strong knowledge of data organization, cataloging, and relational databases.
- Familiarity with AWS and/or GCP cloud ecosystems.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company:
They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About Client:
Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Description: Python API / FastAPI Developer
Location: Hyderabad

Who are we looking for?
We are seeking a Python Developer with strong expertise in Python and databases, and hands-on experience in Azure cloud technologies. The role will focus on migrating processes from the current 3rd-party RPA modules to Apache Airflow modules, ensuring seamless orchestration and automation of workflows (see the sketch after this description). The ideal candidate will bring technical proficiency, problem-solving skills, and a deep understanding of workflow automation, along with a strong grasp of North America insurance industry processes.

Responsibilities:
· Design, develop, and implement workflows using Apache Airflow to replace the current 3rd-party RPA modules.
· Build and optimize Python scripts to enable automation and integration with Apache Airflow pipelines.
· Leverage Azure cloud services for deployment, monitoring, and scaling of Airflow.
· Collaborate with cross-functional teams to understand existing processes, dependencies, and business objectives.
· Lead the migration of critical processes such as Auto, Package, Work Order Processing, and Policy Renewals within CI, Major Accounts, and Middle Market LOBs.
· Ensure the accuracy, efficiency, and scalability of new workflows post-migration.
· Perform unit testing, troubleshooting, and performance tuning for workflows and scripts.
· Document workflows, configurations, and technical details to maintain clear and comprehensive project records.
· Mentor junior developers and share best practices for Apache Airflow and Python.

Technical Skills:
· Proficiency in Python programming for API development, scripting, data transformation, process automation, and database interactions.
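A minimal sketch of what an RPA-to-Airflow migration target can look like: a DAG with two dependent Python tasks. Task names and callables are hypothetical stand-ins for the real renewal modules.

```python
# Illustrative Airflow 2.x DAG replacing a sequential RPA flow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_work_orders(**_):
    print("pull work orders from the source system")  # placeholder logic

def process_renewals(**_):
    print("apply policy renewal rules")  # placeholder logic

with DAG(
    dag_id="policy_renewals",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_work_orders",
                             python_callable=extract_work_orders)
    renew = PythonOperator(task_id="process_renewals",
                           python_callable=process_renewals)
    extract >> renew  # renewals run only after extraction succeeds
```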
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development / Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0725-1837
Employment Type: Full Time

Position Description:
Company Profile: At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.

Job Title: Lead Data Engineer and Developer
Position: Tech Lead
Experience: 8+ Years
Category: Software Development
Main location: Hyderabad, Chennai
Position ID: J0625-0503
Employment Type: Full Time

Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving. Strong understanding of cloud engineering concepts, particularly AWS. Participate in Sprint planning and squad operational activities to guide the team on the right prioritization.

Your future duties and responsibilities:
- Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving.
- Strong understanding of cloud engineering concepts, particularly AWS.
- Participate in Sprint planning and squad operational activities to guide the team on the right prioritization.

Required qualifications to be successful in this role:
Must-have skills:
- SQL - Expert (Must have)
- AWS (Redshift/Lambda/Glue/SQS/SNS/CloudWatch/Step Functions/CDK (or Terraform)) - Expert (Must have)
- PySpark - Intermediate/Expert
- Python - Intermediate (Must have, or PySpark knowledge)

Good-to-have skills:
- AWS Airflow - Intermediate (Nice to have)

Skills: Apache Spark, Python, SQL

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because…
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value.
You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
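To make the stack above concrete, the following is a minimal PySpark sketch of the kind of S3 curation step this role describes; the bucket paths, column names, and app name are hypothetical, and a real Glue or Airflow job would add bookmarking, retries, and monitoring before a Redshift COPY picks up the output.

# Minimal PySpark sketch of an S3 -> transform -> S3 step (hypothetical paths/columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Read raw order events landed in S3 (hypothetical bucket and layout).
raw = spark.read.parquet("s3://example-raw-zone/orders/")

# Basic cleanup: drop malformed rows, derive a partition column, deduplicate.
curated = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Write partitioned parquet; a downstream Redshift COPY (or Glue job) loads it.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-zone/orders/"
)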
Posted 1 week ago
0.0 - 18.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 07/28/2025
Job Type: Full time
Work Experience: 10-18 years
Industry: Technology
Number of Positions: 1
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600086

About Us
Why a career in Zuci is unique! Constant attention is the source of our perfection. We fundamentally believe that building a career is all about consistency. If you jog or walk for a few days, it won't bring in big results. If you do the right things every day for hundreds of days, you'll become lighter, more flexible, and you'll start enjoying your work and life more. Our customers trust us because of our unwavering consistency, enabling us to deliver high-quality work and thereby give our customers and Team Zuci the best shot at extraordinary outcomes. Do you see the big picture? Is Digital Engineering your forte?

Job Description
Solution Architect – Data & AI (GCP + AdTech Focus)
Experience: 15+ Years
Employment Type: Full Time

Role Overview:
We are seeking a highly experienced Solution Architect with deep expertise in Google Cloud Platform (GCP) and a proven track record in architecting data and AI solutions for the AdTech industry. This role will be pivotal in designing scalable, real-time, and privacy-compliant solutions for programmatic advertising, customer analytics, and AI-driven personalization. The ideal candidate should blend strong technical architecture capabilities with deep domain expertise in advertising technology and digital marketing ecosystems.

Key Responsibilities:
Architect and lead GCP-native data and AI solutions tailored to AdTech use cases such as real-time bidding, campaign analytics, customer segmentation, and lookalike modeling.
Design high-throughput data pipelines, audience data lakes, and analytics platforms leveraging GCP services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, Vertex AI, etc.
Collaborate with ad operations, marketing teams, and digital product owners to understand business goals and translate them into scalable and performant solutions.
Integrate with third-party AdTech and MarTech platforms, including DSPs, SSPs, CDPs, DMPs, ad exchanges, and identity resolution systems.
Ensure architectural alignment with data privacy regulations (GDPR, CCPA) and support consent management and data anonymization strategies.
Drive technical leadership across multi-disciplinary teams (Data Engineering, MLOps, Analytics) and enforce best practices in data governance, model deployment, and cloud optimization.
Lead discovery workshops, solution assessments, and architecture reviews during pre-sales and delivery cycles.

GCP & AdTech Tech Stack Expertise:
BigQuery, Cloud Pub/Sub, Dataflow, Dataproc, Cloud Composer (Airflow), Vertex AI, AI Platform, AutoML, Cloud Functions, Cloud Run, Looker, Apigee, Dataplex, GKE.
Deep understanding of programmatic advertising (RTB, OpenRTB), cookie-less identity frameworks, and AdTech/MarTech data flows.
Experience integrating or building components like:
Data Management Platforms (DMPs)
Customer Data Platforms (CDPs)
Demand-Side Platforms (DSPs)
Ad servers, attribution engines, and real-time bidding pipelines
Event-driven and microservices architecture using APIs, streaming pipelines, and edge delivery networks.
Integration with platforms like Google Marketing Platform, Google Ads Data Hub, Snowplow, Segment, or similar.
Strong understanding of IAM, data encryption, PII anonymization, and regulatory compliance (GDPR, CCPA, HIPAA if applicable).
Experience with CI/CD pipelines (Cloud Build), Infrastructure as Code (Terraform), and MLOps pipelines using Vertex AI or Kubeflow.
Strong experience in Python and SQL; familiarity with Scala or Java is a plus.
Experience with version control (Git), Agile delivery, and architectural documentation tools.
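As a concrete illustration of the real-time pipelines named above, here is a minimal Apache Beam sketch that streams AdTech events from Pub/Sub into BigQuery, the pattern a Dataflow job for bid or campaign events would follow; the project, subscription, table, and schema are hypothetical placeholders.

# Minimal Apache Beam sketch: streaming campaign events from Pub/Sub into BigQuery.
# The subscription, table spec, and schema below are hypothetical.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/example-proj/subscriptions/bid-events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-proj:adtech.bid_events",
            schema="event_id:STRING,campaign_id:STRING,bid_price:FLOAT,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )

Run on Dataflow, the same pipeline scales horizontally with event volume, which is why this Pub/Sub-to-BigQuery shape is the usual backbone for RTB and campaign analytics on GCP.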
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Hyderabad, Telangana
On-site
Location: Hyderabad, Telangana
Time type: Full time
Job level: Senior Associate
Job type: Regular
Category: Technology Consulting
ID: JR111910

About us
We are the leading provider of professional services to the middle market globally. Our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You'll find an environment that inspires and empowers you to thrive both personally and professionally. There's no one like you and that's why there's nowhere like RSM.

Snowflake Engineer
We are currently seeking an experienced Snowflake Engineer for our Data Analytics team. This role involves designing, building, and maintaining our Snowflake cloud data warehouse. Candidates should have strong Snowflake, SQL, and cloud data solutions experience.

Responsibilities
Design, develop, and maintain efficient and scalable data pipelines in Snowflake, encompassing data ingestion, transformation, and loading (ETL/ELT).
Implement and manage Snowflake security, including role-based access control, network policies, and data encryption.
Develop and maintain data models optimized for analytical reporting and business intelligence.
Collaborate with data analysts, scientists, and stakeholders to understand data requirements and translate them into technical solutions.
Monitor and troubleshoot Snowflake performance, identifying and resolving bottlenecks.
Automate data engineering processes using scripting languages (e.g., Python, SQL) and orchestration tools (e.g., Airflow, dbt).
Design, develop, and deploy APIs within Snowflake using stored procedures and user-defined functions (UDFs).
Lead and mentor a team of data engineers and analysts, providing technical guidance, coaching, and professional development opportunities.
Stay current with the latest Snowflake features and best practices.
Contribute to the development of data engineering standards and best practices.
Document data pipelines, data models, and other technical specifications.

Qualifications
Bachelor's degree or higher in Computer Science, Information Technology, or a related field.
A minimum of 5 years of experience in data engineering and management, including over 3 years of working with Snowflake.
Strong understanding of data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas.
Proficiency in SQL and experience with data transformation and manipulation.
Experience with ETL/ELT tools and processes.
Experience with Apache Iceberg.
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills.

Preferred qualifications
Snowflake certifications (e.g., SnowPro Core Certification).
Experience with scripting languages (e.g., Python) and automation tools (e.g., Airflow, dbt).
Experience with cloud platforms (e.g., AWS, Azure, GCP).
Experience with data visualization tools (e.g., Tableau, Power BI).
Experience with Agile development methodologies.
Experience with Snowflake Cortex, including Cortex Analyst, Arctic TILT, and Snowflake AI & ML Studio.

At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life's demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html.
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV Status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans, and Indian Armed Forces Personnel status; pre-disposing genetic characteristics or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
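To illustrate the UDF responsibility named in this posting, here is a minimal Snowpark sketch that registers a Python UDF in Snowflake; the connection parameters, function name, and table in the usage comment are hypothetical, and production code would source credentials from a secrets manager rather than inline.

# Minimal Snowpark sketch: registering a Python UDF in Snowflake
# (hypothetical connection parameters and names).
from snowflake.snowpark import Session
from snowflake.snowpark.types import StringType

session = Session.builder.configs({
    "account": "example_account",
    "user": "example_user",
    "password": "...",          # use a secrets manager in practice
    "warehouse": "ANALYTICS_WH",
    "database": "ANALYTICS",
    "schema": "PUBLIC",
}).create()

def mask_email(email: str) -> str:
    # Keep the domain, hide the local part: jane@example.com -> ***@example.com
    if email is None or "@" not in email:
        return None
    return "***@" + email.split("@", 1)[1]

session.udf.register(
    mask_email,
    return_type=StringType(),
    input_types=[StringType()],
    name="mask_email",
    replace=True,
)

# Once registered, the UDF is callable from SQL, e.g.:
# SELECT mask_email(email) FROM customers;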
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Senior Data Engineer at our organization, you will play a crucial role in the Data Engineering team within the Enterprise Data & Analytics organization. Your primary responsibility will be to design, build, and maintain both batch and real-time data pipelines that cater to the needs of our enterprise, analyst communities, and downstream systems. Collaboration with data architects is essential to ensure that the data engineering solutions align with the long-term architecture objectives.

You will be tasked with maintaining and optimizing the data infrastructure to facilitate accurate data extraction, transformation, and loading from diverse data sources. Developing ETL processes will be a key part of your role. Ensuring data accuracy, integrity, privacy, security, and compliance will be a top priority, and you will need to follow quality control procedures and adhere to SOX compliance standards. Monitoring data systems performance, implementing optimization strategies, improving operational practices and metrics, and mentoring junior engineers will also be part of your responsibilities.

To be successful in this role, you should possess a Bachelor's degree in Computer Science, Information Systems, or a related field, along with a minimum of 5 years of relevant experience in data engineering. Experience with cloud data warehouse solutions (such as Snowflake) and cloud platforms (e.g., AWS, Azure, GCP), as well as exposure to Salesforce or any CRM system, will be beneficial. Proficiency in advanced SQL, relational databases, database design, large data sets, distributed computing (Spark/Hive/Hadoop), object-oriented languages (Python, Java), scripting languages, data pipeline tools (Airflow), and agile methodology is required.

Strong problem-solving, communication, and organizational skills, the ability to work independently and collaboratively, a self-starting attitude, and quick learning and adaptability will be crucial for excelling in this role. By following best practices and standards and contributing to the maturity of data engineering practices, you will be instrumental in driving business transformation through data.
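As an illustration of the orchestrated batch pipelines this role centers on, here is a minimal Airflow DAG sketch wiring extract, transform, and load tasks into a daily run; the DAG id and the task bodies are hypothetical placeholders.

# Minimal Airflow sketch: a daily ETL DAG with three dependent tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull from source systems (CRM, cloud warehouse stage, etc.)

def transform():
    ...  # apply business rules and quality checks

def load():
    ...  # publish to the warehouse for analyst consumption

with DAG(
    dag_id="enterprise_daily_batch",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load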
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

American Express has embarked on an exciting transformation, driven by an energetic new team and an inclusive pool of candidates, to give all an equal opportunity for growth. Service Operations is responsible for providing reliable platforms for hundreds of critical applications and utilities within American Express. Its primary focus is to provide technical expertise and tooling to ensure the highest level of reliability and availability for critical applications, to provide consultation and strategic recommendations by quickly assessing and remediating complex availability issues, and to drive automation and efficiencies that increase the quality, availability, and auto-healing of complex processes.

Responsibilities include, but are not limited to:
Design, develop, and maintain data pipelines as a core member of an agile team that drives user story analysis and elaboration and builds responsive web applications using the best engineering practices.
Work closely with data scientists, analysts, and other partners to ensure the flawless flow of data.
Build and optimize reports for analytical and business purposes.
Monitor and resolve data pipeline issues to ensure smooth operation.
Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of data.
Implement data governance policies, access controls, and security measures to protect critical data and ensure compliance.
Develop a deep understanding of integrations with other systems and platforms within the supported domains.
Bring a culture of innovation, ideas, and continuous improvement; challenge the status quo, demonstrate risk-taking, and implement creative ideas.
Manage your own time, and work well both independently and as part of a team.
Adopt emerging standards while promoting best practices and consistent framework usage.
Work with Product Owners to define requirements for new features and plan increments of work.

Minimum Qualifications
BS or MS degree in computer science, computer engineering, or another technical subject area, or equivalent.
0 to 3 years of work experience.
At least 1 to 3 years of hands-on experience with SQL, including schema design, query optimization and performance tuning.
Experience with distributed computing frameworks like Hadoop, Hive, and Spark for processing large-scale data sets.
Proficiency in programming languages such as Python or PySpark for building data pipelines and automation scripts.
Understanding of cloud computing and exposure to BigQuery and Airflow for executing DAGs.
Knowledge of CI/CD, Git commands, and deployment processes.
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows.
Excellent communication and collaboration skills.

We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
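To make the data-quality responsibility in this posting concrete, here is a minimal PySpark sketch of completeness and uniqueness checks that gate a pipeline run; the input path, key column, and thresholds are hypothetical and purely illustrative.

# Minimal PySpark sketch: null-rate and duplicate checks before publishing a dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/data/transactions/")  # hypothetical path

total = df.count()
null_keys = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()

# Fail the run if completeness or uniqueness thresholds are breached.
assert null_keys == 0, f"{null_keys} rows missing txn_id"
assert dupes / max(total, 1) < 0.001, f"{dupes} duplicate txn_ids"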
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Machine Learning Engineer at Expedia Group, you will work in a cross-functional, geographically distributed team of Machine Learning Engineers and ML Scientists. Your role will involve designing and coding large-scale batch and real-time pipelines on the Cloud. You will be responsible for prototyping creative solutions quickly, developing minimum viable products, and collaborating with seniors and peers to implement the technical vision of the team.

In this role, you will act as a point of contact for junior team members, offering advice and direction. You will actively participate in all phases of the end-to-end ML model lifecycle for enterprise applications projects, collaborating with a global team of data scientists, administrators, data analysts, data engineers, and data architects on production systems and applications. Additionally, you will work closely with cross-functional teams to integrate generative AI solutions into existing workflow systems. Your responsibilities will also include participating in code reviews to assess overall code quality and flexibility; defining, developing, and maintaining artifacts like technical design or partner documentation; and maintaining, monitoring, supporting, and improving solutions and systems with a focus on service excellence.

To be successful in this role, you should have a degree in software engineering, computer science, informatics, or a similar field, with at least 5 years of experience for Bachelor's degree holders or 3 years for Master's degree holders. You should be comfortable programming in Python (primary) and Scala (secondary) and have hands-on experience with OOAD, design patterns, SQL, and NoSQL. Knowledge of big data technologies such as Spark, Hive, Hue, and Databricks is essential. Experience in developing and deploying batch and real-time inferencing applications is also required. Additionally, you should have a good understanding of machine learning pipelines and the ML lifecycle, traditional ML algorithms, and the Gen-AI tools and tech stack. Experience with cloud services (e.g., AWS) and workflow orchestration tools (e.g., Airflow) is preferred. A passion for learning, especially in the areas of micro-services, system architecture, Data Science, and Machine Learning, as well as experience working with Agile/Scrum methodologies, is also desired.

If you need assistance with any part of the application or recruiting process due to a disability or physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. Expedia Group is committed to creating an inclusive and diverse work environment where everyone belongs and differences are celebrated.
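As a concrete illustration of the batch inferencing work described above, here is a minimal MLflow sketch that loads a registered model and scores a batch of records; the model URI, registry stage, and file paths are hypothetical placeholders, and a production job would wrap this in an orchestrated, monitored pipeline.

# Minimal MLflow sketch: batch inference with a registered model.
import mlflow.pyfunc
import pandas as pd

# Load a model from the registry (hypothetical name and stage).
model = mlflow.pyfunc.load_model("models:/demand_forecast/Production")

# Score today's batch and persist predictions (hypothetical paths).
batch = pd.read_parquet("/data/scoring/today.parquet")
batch["prediction"] = model.predict(batch)
batch.to_parquet("/data/scoring/today_scored.parquet")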
Posted 1 week ago