8.0 years
0 Lacs
Tamil Nadu, India
On-site
Job Title: Data Engineer

About VXI
VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation & process excellence to the world's most respected brands.
VXI is one of the fastest-growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, VXI not only expanded our footprint in the US and Philippines but also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback.

Job Description:
We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions.

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data models and schemas to support analytics, reporting, and machine learning initiatives.
- Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness.
- Ensure data quality and integrity by implementing data validation, monitoring, and error-handling mechanisms.
- Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling.
- Stay current with emerging technologies and best practices in data engineering and recommend innovative solutions to enhance our data capabilities.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven 8+ years of experience as a data engineer or in a similar role.
- Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
- Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP HANA), and data modeling concepts.
- Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake).
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g., Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus.
- Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques.
- Solid understanding of ETL/ELT processes, data validation, and data security best practices.
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
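For illustration of the data-validation and error-handling work this kind of role describes, here is a minimal sketch in Python/pandas. The table and column names are hypothetical and not part of the posting; a real pipeline would raise or alert instead of printing.

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in a (hypothetical) orders extract."""
    problems = []
    if df.empty:
        problems.append("extract is empty")
        return problems
    # Required columns must be present
    for col in ("order_id", "order_date", "amount"):
        if col not in df.columns:
            problems.append(f"missing column: {col}")
            return problems
    # Keys must be unique and non-null
    if df["order_id"].isna().any():
        problems.append("null order_id values")
    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values")
    # Simple range check on amounts
    if (df["amount"] < 0).any():
        problems.append("negative amounts")
    return problems

if __name__ == "__main__":
    frame = pd.DataFrame(
        {"order_id": [1, 2, 2], "order_date": ["2024-01-01"] * 3, "amount": [10.0, -5.0, 7.5]}
    )
    issues = validate_orders(frame)
    if issues:
        print("validation failed:", issues)
```

Checks like these typically run between the transform and load steps of an ETL job, so bad batches are caught before they reach the warehouse.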
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the team
The Analytics team at smallcase operates as a central analytics function, catering to all product lines and business verticals. We work in a pod structure, with dedicated product and business pods, ensuring focused and high-impact analytics support across the organization. Currently, our team consists of 8 members, each handling multiple projects simultaneously, driving insights, and enabling data-driven decision-making. We thrive in a fast-paced environment, collaborating closely with stakeholders to solve complex problems.

About the role
We are looking for a detail-oriented and curious Analytics Intern to join our Central Analytics team. You will work closely with product managers, business stakeholders, and engineers to turn raw data into meaningful insights that drive decision-making across teams.

What you'll be doing
- Product feature support: Collaborate with Product Managers to support the discovery of new features, design clickstream analytics instrumentation, and test new product features for data accuracy and completeness.
- Analytics and reporting: Monitor and analyze performance metrics, and provide detailed reports and actionable insights on both new features and existing KPIs.
- Automation: Automate recurring business reports and data pipelines using Python and scheduling tools like Airflow and Jupyter Notebooks.
- Collaboration: Collaborate with internal stakeholders, product teams, and engineers on tracking and analytics requirements. Build dashboards and reports using tools like Excel, SQL, and visualisation platforms (e.g., Tableau, Redash, Amplitude). Assist in collecting, cleaning, and analyzing data from multiple sources (e.g., product, marketing, user engagement).

What we're looking for
- Pursuing or completed a degree in a quantitative field (e.g., Engineering, Mathematics, Statistics, Economics, Computer Science).
- Basic proficiency in SQL and Excel/Google Sheets.
- Interest in or basic exposure to analytics tools such as Python/R, Tableau, Google Analytics, Amplitude, Mixpanel.
- Strong problem-solving skills and attention to detail.
- Ability to communicate clearly and work independently.

Nice-to-haves
- Prior internship/project work in data or analytics (specifically clickstream analytics).
- Experience with tools such as AWS and Tableau, or clickstream analytics tools like Mixpanel or Amplitude.
- Basic knowledge of NoSQL querying on MongoDB databases.
- Good communication and interpersonal skills.

About smallcase
At smallcase, we are changing how India invests. smallcase is a leading provider of investment products & platforms to over 10 million Indians. We're a young, driven team of 250+ headquartered in Bangalore. smallcase was founded in July 2015 by three IIT Kharagpur graduates, Vasanth Kamath, Anugrah Shrivastava, and Rohan Gupta. smallcase has been focused on offering innovative investing experiences & technology. Our platforms are used by over 300 of India's largest financial brands and most respected institutions. We are backed by world-class investors, including top-tier funds, institutions, and operators from the capital markets space who believe in our mission of enabling better financial futures for every Indian.

Life at smallcase
We are not just building a business; we are making a long-lasting impact on the wealth & assets landscape with our unique technology & expanding ecosystem. Over the last 9 years, our team, products, and platforms have grown, and so have our ambitions. Innovation remains at the heart of what we do. Our other core values are transparency, integrity & long-term thinking. Our key asset has always been our people, and we empower individuals to build and do some of the best work in their lifetimes at smallcase. Flexibility, ownership, and constant feedback loops are some of the ways we keep evolving the working environment.
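To illustrate the "automate recurring reports with Python and Airflow" part of this role, here is a minimal sketch of a scheduled Airflow DAG. It assumes Airflow 2.4+; the DAG id, schedule, and report logic are placeholders, not anything specified by the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_daily_report(**context):
    # Placeholder: query the warehouse, aggregate KPIs, and push the result
    # to a dashboard or email. The real logic depends on the team's stack.
    print("building report for", context["ds"])


with DAG(
    dag_id="daily_kpi_report",      # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",           # every day at 06:00
    catchup=False,
) as dag:
    PythonOperator(task_id="build_report", python_callable=build_daily_report)
```

Dropping a file like this into the Airflow DAGs folder is usually enough for the scheduler to pick it up and run it daily.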
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Requisition ID # 25WD87938

Position Overview
Autodesk is looking for diverse engineering candidates to join the Compliance team, a collection of systems and teams focused on detecting and exposing non-compliant users of Autodesk software. This is a key strategic initiative for the company. As a Data Engineer, you will contribute to improving critical data processing & analytics pipelines. You will work on challenging problems to enhance the platform's reliability, resiliency, and scalability. We are looking for someone who is detail and quality oriented, and excited about the prospect of having a big impact with data at Autodesk. Our tech stack includes Hive, Spark, Presto, Jenkins, Snowflake, Power BI, Looker, and various AWS services. You will report to the Senior Manager, Software Development, and this is a hybrid position in Bengaluru.

Responsibilities
- Bring a product-focused mindset: understand business requirements and help build systems that can scale and extend to accommodate those needs.
- Break down moderately complex problems, contribute to documenting technical solutions, and assist in making fast, iterative improvements.
- Help build and maintain data infrastructure that powers batch and real-time processing of billions of records.
- Assist in automating cloud infrastructure, services, and observability.
- Contribute to developing CI/CD pipelines and testing automation.
- Collaborate with data engineers, data scientists, product managers, and data stakeholders to understand their needs and promote best practices.
- Bring a growth mindset: support identifying business challenges and opportunities for improvement and help solve them using data analysis and data mining.
- Support analytics and provide insights around product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth.
- Contribute to ad-hoc analysis, long-term projects, reports, and dashboards to find new insights and to measure progress on key initiatives.
- Work closely with business stakeholders to understand and maintain focus on their analytical needs, including identifying critical metrics and KPIs.
- Partner with different teams within the organization to understand business needs and requirements.
- Contribute to presentations that help distill complex problems into clear insights.

Minimum Qualifications
- 2–4 years of relevant industry experience in big data systems, data processing, and SQL databases.
- 2+ years of coding experience with Spark DataFrames, Spark SQL, and PySpark.
- 2+ years of hands-on programming experience; able to write modular, maintainable code, preferably in Python and SQL.
- Good understanding of SQL, dimensional modeling, and analytical big data warehouses like Hive and Snowflake.
- Familiarity with ETL workflow management tools like Airflow.
- 1–2 years of experience building reports and dashboards with BI tools; knowledge of Looker is a plus.
- Exposure to version control and CI/CD tools like Git and Jenkins CI.
- Experience working with data in notebook environments like Jupyter, EMR Notebooks, or Apache Zeppelin.
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience.

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
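As a small illustration of the Spark DataFrame / Spark SQL skills this posting asks for, the sketch below computes a usage metric both ways. The event data is invented for the example; in practice it would be read from Hive, S3, or Snowflake.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage_metrics").getOrCreate()

# Hypothetical product-usage events
events = spark.createDataFrame(
    [("u1", "open", "2024-01-01"), ("u1", "export", "2024-01-01"), ("u2", "open", "2024-01-02")],
    ["user_id", "action", "event_date"],
)

# DataFrame API: distinct users per day and action
daily = events.groupBy("event_date", "action").agg(F.countDistinct("user_id").alias("dau"))

# Equivalent Spark SQL over a temporary view
events.createOrReplaceTempView("events")
daily_sql = spark.sql(
    "SELECT event_date, action, COUNT(DISTINCT user_id) AS dau "
    "FROM events GROUP BY event_date, action"
)

daily.show()
```

Both forms produce the same result; which one a team prefers is mostly a matter of readability and testing style.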
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Join MSBC as an AI/ML Engineer – Deliver Real-World Intelligent Systems

At MSBC, we design and implement practical AI solutions that solve real business problems across industries. As an AI/ML Engineer, you will play a key role in building and deploying machine learning models and data-driven systems that are used in production. This role is ideal for engineers with solid hands-on experience delivering end-to-end AI/ML projects.

Key Tools and Frameworks
• Programming Languages – Python (FastAPI, Flask, Django)
• Machine Learning Libraries – scikit-learn, XGBoost, TensorFlow or PyTorch
• Data Pipelines – Pandas, Spark, Airflow
• Model Deployment – FastAPI, Flask, Docker, MLflow
• Cloud Platforms – AWS, GCP, Azure (any one)
• Version Control – Git

Key Responsibilities
• Design and develop machine learning models to address business requirements.
• Build and manage data pipelines for training and inference workflows.
• Train, evaluate, and optimise models for accuracy and performance.
• Deploy models in production environments using containerised solutions.
• Work with structured and unstructured data from various sources.
• Ensure robust monitoring, retraining, and versioning of models.
• Contribute to architecture and design discussions for AI/ML systems.
• Document processes, results, and deployment procedures clearly.
• Collaborate with software engineers, data engineers, and business teams.

Required Skills and Qualifications
• 4+ years of hands-on experience delivering ML solutions in production environments.
• Strong programming skills in Python and deep understanding of ML fundamentals.
• Experience with supervised and unsupervised learning, regression, classification, and clustering techniques.
• Practical experience in model deployment and lifecycle management.
• Good understanding of data preprocessing, feature engineering, and model evaluation.
• Experience with APIs, containers, and cloud deployment.
• Familiarity with CI/CD practices and version control.
• Ability to work independently and deliver results in fast-paced projects.
• Excellent English communication skills for working with distributed teams.
• Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.

MSBC Group has been a trusted technology partner for over 20 years, delivering the latest systems and software solutions for financial services, manufacturing, logistics, construction, and startup ecosystems. Our expertise includes Accessible AI, Custom Software Solutions, Staff Augmentation, Managed Services, and Business Process Outsourcing. We are at the forefront of developing advanced AI-enabled services and supporting transformative projects. Operating globally, we drive innovation, making us a trusted AI and automation partner.
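As a hedged sketch of the model-deployment pattern this posting lists (FastAPI plus a serialized scikit-learn model), the snippet below exposes a single prediction endpoint. The model file name and feature layout are hypothetical assumptions for the example.

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed to be a fitted classifier saved earlier


class Features(BaseModel):
    values: list[float]  # flat feature vector; order must match training


@app.post("/predict")
def predict(payload: Features):
    X = np.array(payload.values).reshape(1, -1)
    pred = model.predict(X)[0]
    return {"prediction": int(pred)}  # assumes integer class labels
```

Served with something like `uvicorn main:app` (assuming the file is `main.py`), this is the kind of service that then gets containerised with Docker for production.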
Posted 1 week ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
We at MakeMyTrip understand that every traveller is unique, and as the leading OTA in India we have the leverage to redefine the travel booking experience to meet their needs. If you love to travel and want to be a part of a dynamic team that works on personalizing every user's journey, then look no further. We are looking for a brilliant mind like yours to join our Data Platform team to build exciting data products at a scale where we solve for industry-best and fault-tolerant feature stores, real-time data pipelines, catalogs, and much more.

Hands-on: Spark, Scala
Technologies: Spark, Aerospike, Databricks, Kafka, Debezium, EMR, Athena, Glue, RocksDB, Redis, Airflow, MySQL, and any other data sources (e.g. Mongo, Neo4j, etc.) used by other teams.
Location: Gurgaon/Bengaluru
Experience: 6+ years
Industry Preference: E-Commerce
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are looking for a Data Engineer to join our team and help us improve the platform that supports one of the best experimentation tools in the world. You will work side by side with other data engineers and site reliability engineers to improve the reliability, scalability, maintenance, and operations of all the data products that are part of the experimentation tool at Booking.com. Your day-to-day work includes, but is not limited to: maintenance and operations of data pipelines and products that handle data at big scale; the development of capabilities for monitoring, alerting, testing, and troubleshooting of the data ecosystem of the experiment platform; and the delivery of data products that produce metrics for experimentation at scale. You will collaborate with colleagues in Amsterdam to achieve results the right way. This will include engineering managers, product managers, engineers, and data scientists.

Key Responsibilities and Duties
- Take ownership of multiple data pipelines and products and provide innovative solutions to reduce the operational workload required to maintain them.
- Rapidly develop next-generation scalable, flexible, and high-performance data pipelines.
- Contribute to the development of data platform capabilities such as testing, monitoring, debugging, and alerting to improve the development environment of data products.
- Solve issues with data and data pipelines, prioritizing based on customer impact.
- Take end-to-end ownership of data quality in complex datasets and data pipelines.
- Experiment with new tools and technologies, driving innovative engineering solutions to meet business requirements regarding performance, scaling, and data quality.
- Provide self-organizing tools that help the analytics community discover data, assess quality, explore usage, and find peers with relevant expertise.
- Serve as the main point of contact for technical and business stakeholders regarding data engineering issues, such as pipeline failures and data quality concerns.

Role Requirements
- Minimum 5 years of hands-on experience in data engineering as a Data Engineer or as a Software Engineer developing data pipelines and products.
- Bachelor's degree in Computer Science, Computer or Electrical Engineering, Mathematics, or a related field, or 5 years of progressively responsible experience in the specialty as equivalent.
- Solid experience in at least one programming language; we use Java and Python.
- Experience building production data pipelines in the cloud, setting up data lakes and serverless solutions.
- Hands-on experience with schema design and data modeling.
- Experience designing systems end to end and knowledge of basic concepts (load balancing, databases, caching, NoSQL, etc.).
- Knowledge of Flink, CDC, Kafka, Airflow, Snowflake, DBT, or equivalent tools.
- Practical experience building data platform capabilities like testing, alerting, monitoring, debugging, and security.
- Experience working with big data.
- Experience working with teams located in different time zones is a plus.
- Experience with experimentation, statistics, and A/B testing is a plus.
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer, Data Engineering & Analytics
Location: India
Department: IT

About the Company
Rapid7 is seeking a Data Engineer, Data Engineering & Analytics to join a high-performing data engineering and reporting team. This role is responsible for participating in the management of a robust Snowflake infrastructure, data modeling in a modern tech stack, and optimizing the company's Tableau reporting suite, ensuring that all business units have access to timely, accurate, and actionable data. This is a critical position that will help develop and maintain the data strategy, architecture, and analytics capabilities at Rapid7, driving insights that enable business growth. The ideal candidate will have experience in data engineering, analytics, and business intelligence, with equal amounts of business and technical acumen.

About the Role
- Implement data modeling best practices to enhance data accessibility and reporting capabilities.
- Ensure data integrity, security, and compliance with industry standards and regulations.
- Document plans and results in user stories, issues, PRs, and the team's handbook - following the tradition of documentation first!
- Implement the Corp Data philosophy in everything you do.
- Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review.
- Collaborate with IT and DevOps teams to optimize cloud infrastructure and data governance policies.
- Manage and enhance the existing Tableau reporting suite, ensuring self-service analytics and actionable insights for stakeholders.
- Design, develop, and extend the DBT code repository to extend the Enterprise Dimensional Warehouse capabilities and infrastructure.
- Develop and maintain a single source of truth for business metrics, ensuring consistency across reporting platforms.
- Approve data model changes as a Data Team Reviewer and code owner for specific database and data model schemas.
- Provide data modeling expertise to all Rapid7 teams through code reviews, pairing, and training to help deliver optimal, DRY, and scalable database designs and queries in Snowflake and in Tableau.
- Research and implement emerging trends in data analytics, visualization, and engineering, bringing innovative solutions to the organization.
- Align to data governance frameworks, policies, and best practices, in collaboration with existing teams, policies, and governance frameworks.
- Identify and lead opportunities for new data initiatives, ensuring Rapid7 remains data-driven and insights-powered.

What You Bring to the Role
- Ability to thrive in a fast-paced hybrid organization.
- Comfort working in a highly agile, intensely iterative environment.
- Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations.
- 2+ years of experience in data engineering, analytics, or business intelligence.
- 2+ years of experience designing, implementing, operating, and extending enterprise dimensional data models.
- 2+ years of experience building reports and dashboards in Tableau and/or other similar data visualization tools.
- Experience in DBT modeling and an understanding of modular, performant models.
- Solid understanding of Snowflake, SQL, and data warehouse management.
- Understanding of ETL/ELT processes, data pipelines, and cloud-based data architectures.
- Familiarity with modern data stacks (DBT, Airflow, Fivetran, Matillion, or similar tools).
- Ability to manage data governance, security, and compliance requirements (SOC 2, GDPR, etc.).
- A passion for continuous learning, innovation, and leveraging data for business impact.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
- Experienced with AWS, with a strong understanding of cloud services and infrastructure.
- Knowledgeable in Big Data concepts and experienced with AWS Glue, including setting up jobs, data cataloging, and managing crawlers.
- Proficient in using and maintaining Apache Airflow for workflow management and Terraform for infrastructure automation.
- Skilled in Python for scripting and automation tasks.
- Independent and proactive in solving problems and troubleshooting issues.
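For the AWS Glue automation this role mentions, a common pattern is to trigger and monitor Glue jobs from Python with boto3. The sketch below assumes a Glue job already exists; the job name is hypothetical and credentials/region come from the standard AWS configuration.

```python
import time

import boto3

glue = boto3.client("glue")

# Kick off a (hypothetical) Glue job and remember its run id
run = glue.start_job_run(JobName="daily_ingest_job")
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state
while True:
    job_run = glue.get_job_run(JobName="daily_ingest_job", RunId=run_id)
    state = job_run["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("job finished with state", state)
        break
    time.sleep(30)
```

In practice this kind of polling logic usually lives inside an Airflow task or is replaced by the Glue operators/sensors that Airflow's AWS provider ships.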
Posted 1 week ago
0 years
0 Lacs
India
On-site
Who We Are
A part of the Cerebrent Group, Millipixels Interactive is an experience-led, interactive solutions company that collaborates with startups and enterprise clients to deliver immersive brand experiences and transformational technology projects. Our Offshore Innovation Center model allows clients to leverage our cost differentiators and innovation to redefine what's possible. We are detail-oriented, structured, and committed to delivering value with every engagement. With clients like Facebook, Google, McGraw-Hill, and more, you will be part of a team known for innovation and technical capability.

The Role
We are seeking a talented LangGraph Developer to join our AI development team. In this role, you will design and implement stateful, multi-agent systems using LangGraph, enabling advanced language model orchestration and agent collaboration. You'll work at the cutting edge of LLM application development, creating sophisticated workflows that require reasoning, memory, and structured interactions across agents.

Key Responsibilities
- Design and develop stateful agent workflows using LangGraph for a variety of use cases.
- Collaborate with prompt engineers, data scientists, and product teams to build modular and scalable LLM applications.
- Implement graph-based architectures that coordinate language models, tools, and memory.
- Integrate external APIs, databases, and vector stores into LangGraph workflows.
- Optimize performance, latency, and cost of multi-agent systems.
- Test, monitor, and debug LangGraph pipelines in production environments.
- Document architecture, nodes, and agent behavior clearly for internal teams.

Required Technical Skills
- Strong Python programming skills.
- Experience with the LangGraph and LangChain frameworks.
- Familiarity with LLMs like GPT-4, Claude, or open-source models (e.g., Mistral, LLaMA).
- Experience with state machines, DAGs, or workflow orchestration tools (e.g., Airflow, Prefect).
- Knowledge of prompt engineering, agent design, and memory architectures.
- Experience with APIs, webhooks, and external tool integration.

Preferred
- Familiarity with vector databases (e.g., Pinecone, Weaviate, Chroma).
- Experience building multi-agent systems or autonomous agents (e.g., AutoGPT, CrewAI, AgentOps).
- Background in AI/ML, NLP, or computational linguistics.

Benefits of Working at Millipixels
- Choose your working times - focus on delivering targets, not on time spent.
- Company-paid medical health insurance of ₹500,000, with the option to extend coverage to a spouse and/or other immediate dependents at cost.
- Regular financial, tax-saving, and healthcare advice sessions from experts.
- Generous paid vacation over the year.
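To give a flavour of the stateful graph workflows this role centres on, here is a minimal, hedged LangGraph sketch (two placeholder nodes sharing a typed state). The node logic is a stand-in for real LLM and tool calls, and the exact API surface may vary by LangGraph version.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END


class AgentState(TypedDict):
    question: str
    draft: str


def research(state: AgentState) -> dict:
    # Placeholder for retrieval / tool calls
    return {"draft": f"notes about: {state['question']}"}


def answer(state: AgentState) -> dict:
    # Placeholder for an LLM call that turns notes into a reply
    return {"draft": state["draft"].upper()}


graph = StateGraph(AgentState)
graph.add_node("research", research)
graph.add_node("answer", answer)
graph.set_entry_point("research")
graph.add_edge("research", "answer")
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"question": "What is LangGraph?", "draft": ""}))
```

Each node returns a partial state update that LangGraph merges into the shared state, which is what makes memory and multi-step reasoning composable across agents.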
Posted 1 week ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Role: Data Engineer
Key Skills: PySpark, Cloudera Data Platform, Big Data (Hadoop), Hive, Kafka

Responsibilities
- Data pipeline development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data transformation and processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data quality and validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Technical Skills
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big data technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and automation: Strong scripting skills in Linux.
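A minimal sketch of the kind of PySpark job this posting describes on a Hive-backed platform: read a staging table, cleanse it, and write a partitioned analytics table. The database and table names are hypothetical, and the job assumes Hive support is available in the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("orders_etl")
    .enableHiveSupport()          # required to read/write Hive tables on CDP
    .getOrCreate()
)

raw = spark.table("staging.orders_raw")          # hypothetical staging table

clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)

(
    clean.write.mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .saveAsTable("analytics.orders_clean")       # hypothetical target table
)
```

A job like this would typically be submitted with spark-submit and scheduled through Oozie or Airflow, as the responsibilities above suggest.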
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities will include:

Functional Expertise
- Collaborate closely with multiple teams to translate requirements into technical specifications.
- Offer clear technical guidance and direction to ensure solutions meet user and technical requirements.
- Lead technical discussions and code reviews to maintain code quality, identify improvement opportunities, and ensure adherence to standards.
- Stay updated on the latest data engineering trends and apply them to solve complex challenges.

Problem Solving & Communication
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Provide guidance and mentorship to other team members, fostering their professional growth and skill development.
- Experience working with fintech institutions is a plus.

Qualification & Experience
- B.Tech degree.

Skills & Know-how
- Experience level: 3-5 years.
- Minimum 2 years of relevant experience in AWS.
- Cloud data warehousing experience: Redshift/SQL.
- In-memory framework experience: PySpark.
- Data engineering pipeline use-case experience: ingestion of data from different sources to a cloud file system (S3 buckets), transforming/processing the data using AWS Glue, and finally loading it into a cloud warehouse for data analytics.
- Big data use cases: exposure to huge data volumes involving TBs of data for storage/migration/processing.
- Programming experience in Python.
- Familiarity with reports/dashboards using cloud-native applications.
- Knowledge of data pipeline orchestration using Airflow (good to have).
- Understanding of API development.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Data Engineer (Backend Java & Python)
Location: Chennai
Work Mode: Hybrid

We are looking for a highly skilled Backend Data Engineer to join our growing FinTech team. In this role, you will design and implement robust data models and architectures, build scalable data ingestion pipelines, and ensure data quality across financial datasets. You will play a key role in enabling data-driven decision-making by developing efficient and secure data infrastructure tailored to the fast-paced FinTech environment.

Key Responsibilities:
- Design and implement scalable data models and data architecture to support financial analytics, risk modeling, and regulatory reporting.
- Build and maintain data ingestion pipelines using Python or Java to process high-volume, high-velocity financial data from diverse sources.
- Lead data migration efforts from legacy systems to modern cloud-based platforms.
- Develop and enforce data validation processes to ensure accuracy, consistency, and compliance with financial regulations.
- Create and manage task schedulers to automate data workflows and ensure timely data availability.
- Collaborate with product, engineering, and data science teams to deliver reliable and secure data solutions.
- Optimize data processing for performance, scalability, and cost-efficiency in a cloud environment.

Required Skills & Qualifications:
- Proficiency in Python and/or Java for backend data engineering tasks.
- Strong experience in data modelling, ETL/ELT pipeline development, and data architecture.
- Hands-on experience with data migration and transformation in financial systems.
- Familiarity with task scheduling tools (e.g., Apache Airflow, Cron, Luigi).
- Solid understanding of SQL and experience with relational and NoSQL databases.
- Knowledge of data validation frameworks and best practices in financial data quality.
- Experience with cloud platforms (AWS, GCP, or Azure), especially their data services.
- Understanding of data security, compliance, and regulatory requirements in FinTech.

Preferred Qualifications:
- Experience with big data technologies (e.g., Spark, Kafka, Hadoop).
- Familiarity with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Exposure to financial data standards (e.g., FIX, ISO 20022) and regulatory frameworks (e.g., GDPR, PCI-DSS).
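Luigi is one of the task schedulers this posting lists; as a hedged illustration of scheduled, dependency-aware workflows, here is a tiny two-task Luigi pipeline (ingest then validate). The file paths, module name, and trade data are hypothetical.

```python
import luigi
import pandas as pd


class IngestTrades(luigi.Task):
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"data/trades_{self.date}.csv")

    def run(self):
        # Placeholder extract; a real task would pull from an API or database
        df = pd.DataFrame({"trade_id": [1, 2], "amount": [100.0, 250.5]})
        with self.output().open("w") as f:
            df.to_csv(f, index=False)


class ValidateTrades(luigi.Task):
    date = luigi.DateParameter()

    def requires(self):
        return IngestTrades(date=self.date)

    def output(self):
        return luigi.LocalTarget(f"data/trades_{self.date}.ok")

    def run(self):
        with self.input().open("r") as f:
            df = pd.read_csv(f)
        assert df["trade_id"].is_unique, "duplicate trade ids"
        with self.output().open("w") as f:
            f.write("ok\n")

# Example invocation (module name is hypothetical):
#   python -m luigi --module trades_pipeline ValidateTrades --date 2024-01-01 --local-scheduler
```

Because each task declares its output, re-running the pipeline skips work that is already done, which is the main appeal of this style of scheduler for daily financial loads.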
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
🚀 We're Hiring: Product Manager (Tech) - Chennai 🚀

Are you a technically strong Product Manager with a passion for driving software strategy and execution? Join our dynamic team in Chennai to shape the future of our products while ensuring top-notch quality, compliance, and customer impact!

📍 Role: Product Manager (Tech)
🌍 Location: Chennai (Hybrid/On-site)
💼 Experience: 3+ years

🔍 What You'll Do:
Define product roadmaps, strategy, and prioritization for high-impact software solutions. Manage backlog grooming, sprint planning, and execution in JIRA. Collaborate closely with engineering teams to ensure technical excellence. Uphold compliance, security, and product health (e.g., using tools like Cycode, Checkmarx, Fossa). Work with cutting-edge tech: GCP, Cloud Run, Airflow, BigQuery, Terraform, LLMs, and more. Communicate effectively with stakeholders to align business and technical goals.

🛠 Skills You Bring:
✔ Must-Have: Strong product management fundamentals (3+ years of experience). Technical proficiency in Python and agile tools like JIRA. Ability to bridge engineering, business, and compliance needs.
✔ Nice-to-Have: Experience with GCP, Angular, Airflow, BigQuery, Terraform, or Dynatrace. Familiarity with security/compliance tools (Checkmarx, Fossa, Cycode). Exposure to AI/LLM-driven products is a plus!

🌟 Why Join Us?
✅ Work on scalable, high-impact products with a talented team.
✅ Opportunity to shape technical strategy in a fast-paced environment.
✅ Competitive compensation, growth opportunities, and a collaborative culture.

📩 Interested? Apply now or tag someone who'd be a great fit!

#ProductManagement #TechJobs #ChennaiHiring #ProductManager #Python #GCP #JIRA #WeAreHiring
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Company Description
Innovatics helps businesses conquer their toughest challenges with advanced analytics and AI. Specializing in transformation and human-centered experiences, Innovatics turns complexity into clarity and uncertainties into data-driven opportunities. Our team of passionate data analytics and AI consultants works across the USA, Australia, Canada, and India to deliver tangible results. We provide seamless end-to-end data analytics, data strategy, data engineering, and AI consulting services.

Role Description
This is a full-time, on-site role for a Data Engineer located in Ahmedabad. The Data Engineer will be responsible for designing, implementing, and managing data pipelines, building and maintaining data warehousing solutions, and ensuring efficient ETL processes. The role also involves data modeling, performing data analytics, and collaborating with team members to deliver data-driven solutions and insights to support business decisions.

Qualifications
- Skills in data engineering and data modeling
- Experience with Extract, Transform, Load (ETL) processes
- Proficiency in data warehousing
- Knowledge of data analytics
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Experience in the AI and advanced analytics field is a plus

Technical Skills:
- 4+ years of experience in a Data Engineer role
- Experience with object-oriented/object-function scripting languages: Python, Scala, Golang, Java, etc.
- Experience with big data tools such as Spark, Hadoop, Kafka, Airflow, and Hive
- Experience with streaming data: Spark, Kinesis, Kafka, Pub/Sub, Event Hub
- Experience with GCP, Azure Data Factory, or AWS
- Strong SQL scripting
- Experience with ETL tools
- Knowledge of the Snowflake Data Warehouse
- Knowledge of orchestration frameworks: Airflow/Luigi
- Good to have: knowledge of data quality management frameworks
- Good to have: knowledge of Master Data Management
- Self-learning abilities are a must
- Familiarity with upcoming new technologies is a strong plus
- Bachelor's degree in big data analytics, computer engineering, or a related field
Posted 1 week ago
6.0 years
0 Lacs
Sanganer, Rajasthan, India
On-site
Unlock yourself. Take your career to the next level.

At Atrium, we live and deliver at the intersection of industry strategy, intelligent platforms, and data science — empowering our customers to maximize the power of their data to solve their most complex challenges. We have a unique understanding of the role data plays in the world today and serve as market leaders in intelligent solutions. Our data-driven, industry-specific approach to business transformation for our customers places us uniquely in the market.

Who are you?
You are smart, collaborative, and take ownership to get things done. You love to learn and are intellectually curious about business and technology tools, platforms, and languages. You are energized by solving complex problems and bored when you don't have something to do. You love working in teams and are passionate about pulling your weight to make sure the team succeeds.

What will you be doing at Atrium?
In this role, you will join the best and brightest in the industry to skillfully push the boundaries of what's possible. You will work with customers to make smarter decisions through innovative problem-solving using data engineering, analytics, and systems of intelligence. You will partner to advise, implement, and optimize solutions through industry expertise, leading cloud platforms, and data engineering. As a Snowflake Data Engineering Lead, you will be responsible for expanding and optimizing the data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You will support the software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

In this role, you will:
- Lead the design and architecture of end-to-end data warehousing and data lake solutions, focusing on the Snowflake platform and incorporating best practices for scalability, performance, security, and cost optimization.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Lead and mentor both onshore and offshore development teams, creating a collaborative environment.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, DBT, Python, AWS, and big data tools.
- Develop ELT processes to ensure timely delivery of required data for customers.
- Implement data quality measures to ensure the accuracy, consistency, and integrity of data.
- Design, implement, and maintain data models that can support the organization's data storage and analysis needs.
- Deliver technical and functional specifications to support data governance and knowledge sharing.

In this role, you will have:
- Bachelor's degree in Computer Science, Software Engineering, or an equivalent combination of relevant work experience and education.
- 6+ years of experience delivering consulting services to medium and large enterprises; implementations must have included a combination of Data Warehousing or Big Data consulting for mid-to-large-sized organizations.
- 3+ years of experience specifically with Snowflake, demonstrating deep expertise in its core features and advanced capabilities.
- Strong analytical skills with a thorough understanding of how to interpret customer business needs and translate them into a data architecture.
- SnowPro Core certification is highly desired.
- Hands-on experience with Python (Pandas, DataFrames, functions).
- Strong proficiency in SQL (stored procedures, functions), including debugging, performance optimization, and database design.
- Strong experience with Apache Airflow and API integrations.
- Solid experience with at least one ETL/ELT tool (DBT, Coalesce, WhereScape, MuleSoft, Matillion, Talend, Informatica, SAP BODS, DataStage, Dell Boomi, etc.).
- Nice to have: experience with Docker, DBT, data replication tools (SLT, Fivetran, Airbyte, HVR, Qlik, etc.), shell scripting, Linux commands, AWS S3, or big data technologies.
- Strong project management, problem-solving, and troubleshooting skills with the ability to exercise mature judgment.
- Enthusiastic, professional, and confident team player with a strong focus on customer success who can present effectively even under adverse conditions.
- Strong presentation and communication skills.

Next Steps
Our recruitment process is highly personalized. Some candidates complete the hiring process in one week, while others may take longer, as it's important we find the right position for you. It's all about timing and can be a journey as we continue to learn about one another. We want to get to know you and encourage you to be selective - after all, deciding to join a company is a big decision!

At Atrium, we believe a diverse workforce allows us to match our growth ambitions and drive inclusion across the business. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment.
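As a small, hedged example of the Python-plus-Snowflake work this role involves, the snippet below runs a query through the official Snowflake connector. The connection parameters and table name are placeholders; in a real project they would come from a secrets manager rather than the code.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_DATE, COUNT(*) FROM orders")  # hypothetical table
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```

The same connection pattern underpins orchestration from Airflow or DBT adapters, which is why comfort with both SQL and Python is called out in the requirements.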
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
PayPay is looking for an experienced cloud-based AI and ML Engineer. This role involves leveraging cloud-based AI/ML services to build infrastructure as well as developing, deploying, and maintaining ML models, collaborating with cross-functional teams, and ensuring scalable and efficient AI solutions, particularly on Amazon Web Services (AWS).

Main Responsibilities
1. Cloud Infrastructure Management:
- Architect and maintain cloud infrastructure for AI/ML projects using AWS tools.
- Implement best practices for security, cost management, and high availability.
- Monitor and manage cloud resources to ensure seamless operation of ML services.
2. Model Development and Deployment:
- Design, develop, and deploy machine learning models using AWS services such as SageMaker.
- Collaborate with data scientists and data engineers to create scalable ML workflows.
- Optimize models for performance and scalability on AWS infrastructure.
- Implement CI/CD pipelines to streamline and accelerate the model development and deployment process.
- Set up a cloud-based development environment for data engineers and data scientists to facilitate model development and exploratory data analysis.
- Implement monitoring, logging, and observability to streamline operations and ensure efficient management of models deployed in production.
3. Data Management:
- Work with structured and unstructured data to train robust ML models.
- Use AWS data storage and processing services like S3, RDS, Redshift, or DynamoDB.
- Ensure data integrity and compliance with applicable security regulations and standards.
4. Collaboration and Communication:
- Collaborate with cross-functional teams including DevOps, Data Engineering, and Product Management.
- Communicate technical concepts effectively to non-technical stakeholders.
5. Continuous Improvement and Innovation:
- Stay updated with the latest advancements in AI/ML technologies and AWS services.
- Provide automated means for developers to easily develop and deploy their AI/ML models on AWS.

Tech Stack
- AWS: VPC, EC2, ECS, EKS, Lambda, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK, Amazon Kinesis, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, AWS Lake Formation, AWS Glue, SageMaker, and other AI services.
- Terraform, GitHub Actions, Prometheus, Grafana, Atlantis.
- OSS (administration experience on these tools): Jupyter, MLflow, Argo Workflows, Airflow.

Required Skills and Experience
- 5+ years of technical experience in cloud-based infrastructure with a focus on AI and ML platforms.
- Extensive hands-on technical experience with compute, storage, and analytical services on AWS.
- Demonstrated skill in programming and scripting languages, including Python, shell scripting, Go, and Rust.
- Experience with infrastructure-as-code (IaC) tools on AWS, such as Terraform, CloudFormation, and CDK.
- Proficiency in Linux internals and system administration.
- Experience in production-level infrastructure change management and releases for business-critical systems.
- Experience in cloud infrastructure and platform availability, performance, and cost management.
- Strong understanding of cloud security best practices and payment-industry compliance standards.
- Experience with cloud services monitoring, detection, and response, as well as performance tuning and cost control.
- Familiarity with cloud infrastructure service patching and upgrades.
- Excellent oral, written, and interpersonal communication skills.

Preferred Qualifications
- Bachelor's degree or above in a technology-related field.
- Experience with other cloud service providers (e.g., GCP, Azure).
- Experience with Kubernetes.
- Experience with event-driven architecture (Kafka preferred).
- Experience using and contributing to open-source tools.
- Experience managing IT compliance and security risk.
- Published papers / blogs / articles.
- Relevant and verifiable certifications.

Remarks
*Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate.

PayPay 5 Senses
Please refer to the PayPay 5 senses to learn what we value at work.

Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work in the Gurugram office to establish a strong core team.
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
PayPay's rapid growth necessitates the expansion of its product teams and underscores the critical need for a resilient data engineering platform. This platform is vital to support our increasing business demands. The Data Pipeline team is tasked with creating, deploying, and managing this platform, utilizing leading technologies like Databricks, Delta Lake, Spark, PySpark, Scala, and the AWS suite. We are actively seeking skilled Data Engineers to join our team and contribute to scaling our platform across the organization.

Main Responsibilities
- Create and manage robust data ingestion pipelines leveraging Databricks, Airflow, Kafka, and Terraform.
- Ensure high performance, reliability, and efficiency by optimizing large-scale data pipelines.
- Develop data processing workflows using Databricks, Delta Lake, and Spark technologies.
- Maintain and improve the Data Lakehouse, utilizing Unity Catalog for efficient data management and discovery.
- Construct automation, frameworks, and enhanced tools to streamline data engineering workflows.
- Collaborate across teams to facilitate smooth data flow and integration.
- Enforce best practices in observability, data governance, security, and regulatory compliance.

Qualifications
- Minimum 5 years as a Data Engineer or in a similar role.
- Hands-on experience with Databricks, Delta Lake, Spark, and Scala.
- Proven ability to design, build, and operate data lakes or data warehouses.
- Proficiency with data orchestration tools (Airflow, Dagster, Prefect).
- Familiarity with change data capture tools (Canal, Debezium, Maxwell).
- Strong command of at least one primary language (Scala, Python, etc.) and SQL.
- Experience with data catalog and metadata management (Unity Catalog, Lake Formation).
- Experience in Infrastructure as Code (IaC) using Terraform.
- Excellent problem-solving and debugging abilities for complex data challenges.
- Strong communication and collaboration skills.
- Capability to make informed decisions, learn quickly, and consider complex technical contexts.
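A common Delta Lake pattern behind pipelines like these is an upsert (MERGE) of change data into a target table. Below is a hedged PySpark sketch; the table path and join key are hypothetical, and it assumes the delta-spark libraries are available (they are preinstalled on Databricks, while a local session needs extra configuration).

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdc_upsert").getOrCreate()

# Hypothetical batch of changed rows (e.g., produced by a CDC tool)
updates = spark.createDataFrame(
    [(1, "paid"), (2, "pending")], ["payment_id", "status"]
)

target = DeltaTable.forPath(spark, "/mnt/lake/payments")  # hypothetical Delta path

(
    target.alias("t")
    .merge(updates.alias("s"), "t.payment_id = s.payment_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Because MERGE is transactional in Delta, downstream readers only ever see either the old or the fully updated version of the table, which is what makes CDC-style ingestion reliable at scale.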
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Data Engineering Automation Tester
Experience: 6+ Years
Location: Gurgaon, India

Mandatory Skills
- Strong experience in distributed computing (Spark) and software development.
- Proficiency in working with databases (preferably Postgres).
- Solid understanding of object-oriented programming and development principles.
- Experience in Agile development methodologies (Scrum/Kanban).
- Hands-on experience with version control tools (preferably Git).
- Exposure to CI/CD pipelines.
- Strong background in automated testing, including integration/delta, load, and performance testing.
- Extensive experience in database testing (preferably Postgres).

Good-to-Have Skills
- Exposure to Docker and containerized environments.
- Experience with Spark-Scala.
- Knowledge of data engineering principles.
- Experience in Python and .NET Core.
- Familiarity with Kubernetes.
- Experience with Airflow.
- Working knowledge of cloud platforms (GCP and Azure).
- Experience with TeamCity CI and Octopus Deploy.
Posted 1 week ago
15.0 years
0 Lacs
India
Remote
Job Title: Data Engineer Lead - AEP
Location: Remote
Experience Required: 12–15 years overall experience; 8+ years in data engineering; 5+ years leading data engineering teams; cloud migration & consulting experience (GCP preferred).

Job Summary:
We are seeking a highly experienced and strategic Lead Data Engineer with a strong background in leading data engineering teams, modernizing data platforms, and migrating ETL pipelines and data warehouses to Google Cloud Platform (GCP). You will work directly with enterprise clients, architecting scalable data solutions and ensuring successful delivery in high-impact environments.

Key Responsibilities:
- Lead end-to-end data engineering projects, including cloud migration of legacy ETL pipelines and data warehouses to GCP (BigQuery).
- Design and implement modern ELT/ETL architectures using Dataform, Dataplex, and other GCP-native services.
- Provide strategic consulting to clients on data platform modernization, governance, and data quality frameworks.
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
- Define and enforce data engineering best practices, coding standards, and CI/CD processes.
- Mentor and manage a team of data engineers; foster a high-performance, collaborative team culture.
- Monitor project progress, ensure delivery timelines, and manage client expectations.
- Engage in technical pre-sales and solutioning, driving excellence in consulting delivery.

Technical Skills & Tools:
- Cloud platforms: strong experience with Google Cloud Platform (GCP), particularly BigQuery, Dataform, Dataplex, Cloud Composer, Cloud Storage, and Pub/Sub.
- ETL/ELT tools: Apache Airflow, Dataform, dbt (if applicable).
- Languages: Python, SQL, shell scripting.
- Data warehousing: BigQuery, Snowflake (optional), traditional DWs (e.g., Teradata, Oracle).
- DevOps: Git, CI/CD pipelines, Docker.
- Data modeling: dimensional modeling, Data Vault, star/snowflake schemas.
- Data governance & lineage: Dataplex, Collibra, or equivalent tools.
- Monitoring & logging: Stackdriver, Datadog, or similar.

Preferred Qualifications:
- Proven consulting experience with premium clients or Tier 1 consulting firms.
- Hands-on experience leading large-scale cloud migration projects.
- GCP certification(s) (e.g., Professional Data Engineer, Cloud Architect).
- Strong client communication, stakeholder management, and leadership skills.
- Experience with agile methodologies and project management tools like JIRA.
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration.
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
- Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
- Proven experience with batch data pipelines and training/inference orchestration.
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
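To make the RAG responsibility above concrete, here is a deliberately minimal, self-contained sketch of the retrieval step: embed documents, embed the query, rank by cosine similarity, and assemble a prompt. The `embed` function is a hash-based placeholder standing in for a real embedding model; the documents and query are invented for the example.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-vector, NOT suitable for real use.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(64)


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


docs = [
    "Refunds are processed within 5 business days.",
    "Invoices can be downloaded from the billing page.",
]
doc_vecs = [embed(d) for d in docs]

query = "How long do refunds take?"
q_vec = embed(query)

# Rank documents by similarity to the query and keep the best match as context
ranked = sorted(zip(docs, doc_vecs), key=lambda dv: cosine(q_vec, dv[1]), reverse=True)
context = ranked[0][0]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # in production this prompt would be sent to the chosen LLM
```

In a production system the brute-force loop is replaced by a vector store and the placeholder embedding by a real model, but the retrieve-then-prompt shape stays the same.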
Posted 1 week ago
4.0 - 8.0 years
3 - 13 Lacs
Pune, Maharashtra, India
On-site
What Your Responsibilities Will Be
- Design, develop, and maintain efficient ETL pipelines using DBT and Airflow to move and transform data from multiple sources into a data warehouse.
- Lead the development and optimization of data models (e.g., star and snowflake schemas) and data structures to support reporting.
- Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to manage and scale data storage, processing, and transformation processes.
- Work with business teams, marketing, and sales departments to understand data requirements and translate them into actionable insights and efficient data structures.
- Use advanced SQL and Python skills to query, manipulate, and transform data for multiple use cases and reporting needs.
- Implement data quality checks and ensure that the data adheres to governance best practices, maintaining consistency and integrity across datasets.
- Use Git for version control and collaborate on data engineering projects.

What You'll Need to Be Successful
- Bachelor's degree with 6+ years of experience in data engineering.
- ETL/ELT expertise: experience in building and improving ETL/ELT processes.
- Data modeling: experience designing and implementing data models such as star and snowflake schemas, and working with denormalized tables to optimize reporting performance.
- Experience with cloud-based data platforms (AWS, Azure, Google Cloud).
- SQL and Python proficiency: advanced SQL skills for querying large datasets and Python for automation, data processing, and integration tasks.
- DBT experience: hands-on experience with DBT (Data Build Tool) for transforming and managing data models.

Good-to-Have Skills
- Familiarity with AI concepts such as machine learning (ML), natural language processing (NLP), and generative AI.
- Work with AI-driven tools and models for data analysis, reporting, and automation.
- Oversee and implement DBT models to improve the data transformation process.
- Experience in the marketing and sales domain, with lead management, marketing analytics, and sales data integration.
- Familiarity with business intelligence reporting tools, such as Power BI, for building data models and generating insights.
Posted 1 week ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Profile: Sr. DW BI Developer
Location: Sector 64, Noida (Work from Office)
Position Overview:
Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will develop the ERP system, provide comprehensive day-to-day support and training, and evolve the current ERP system for the future.
Key Responsibilities:
As a Sr. DW BI Developer, the candidate will participate in the design, development, customization, and maintenance of software applications.
Analyse the different applications/products, and design and implement the data warehouse using best practices.
Rich data governance experience, including data security, data quality, and provenance/lineage.
Maintain a close working relationship with the other application stakeholders.
Experience developing secured and high-performance web applications.
Knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile.
Designing and architecting future releases of the platform.
Participating in troubleshooting application issues.
Jointly working with other teams and partners handling different aspects of the platform creation.
Tracking advancements in software development technologies and applying them judiciously in the solution roadmap.
Ensuring all quality controls and processes are adhered to.
Planning the major and minor releases of the solution.
Ensuring robust configuration management.
Working closely with the Engineering Manager on different aspects of product lifecycle management.
Demonstrate the ability to work independently in a fast-paced environment requiring multitasking and efficient time management.
Required Skills and Qualifications:
End-to-end lifecycle of data warehousing, data lakes, and reporting.
Experience maintaining and managing data warehouses.
Responsible for the design and development of large, scaled-out, real-time, high-performing data lake / data warehouse systems (including big data and cloud).
Strong SQL and analytical skills.
Experience in Power BI, Tableau, QlikView, Qlik Sense, etc.
Experience in Microsoft Azure services.
Experience in developing and supporting ADF pipelines.
Experience in Azure SQL Server / Databricks / Azure Analysis Services.
Experience in developing tabular models.
Experience in working with APIs.
Minimum 2 years of experience in a similar role.
Experience with data warehousing and data modelling.
Strong experience in SQL.
2-6 years of total experience in building DW/BI systems.
Experience with ETL and working with large-scale datasets.
Proficiency in writing and debugging complex SQL.
Prior experience working with global clients.
Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala, and Sqoop.
Storage technologies such as HDFS, object storage (S3, etc.), RDBMS, MPP, and NoSQL databases.
Experience with distributed data management and data failover, including databases (relational, NoSQL, big data), data analysis, data processing, data transformation, high availability, and scalability.
Experience in end-to-end project implementation in the cloud (Azure / AWS / GCP) as a DW BI Developer.
Rich data governance experience, including data security, data quality, and provenance/lineage. Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML.
Prior experience of working in clouds like Azure, AWS, and GCP.
Prior experience of working with global clients.
To know our Privacy Policy, please click on the link below or copy paste the URL on your browser: https://gedu.global/wp-content/uploads/2023/09/GEDU-Privacy-Policy-22092023-V2.0-1.pdf
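To illustrate the Kafka-plus-Spark skills this posting lists, here is a minimal PySpark Structured Streaming sketch that reads events from a Kafka topic and lands them as Parquet. It assumes the spark-sql-kafka connector package is available on the cluster; the broker address, topic name, and paths are placeholders, not values tied to this role.

```python
# Minimal PySpark Structured Streaming sketch: Kafka topic -> Parquet files.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_to_parquet").getOrCreate()

# Read a stream of raw Kafka records (key/value arrive as binary columns).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "clickstream")                # placeholder topic
    .load()
    .select(col("value").cast("string").alias("payload"), col("timestamp"))
)

# Persist the stream to Parquet; the checkpoint enables exactly-once file output.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/landing/clickstream")               # placeholder path
    .option("checkpointLocation", "/data/checkpoints/clickstream")
    .start()
)

query.awaitTermination()
```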
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client:
Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.
Job Title: Data Engineer
Key Skills: Python, ETL, Snowflake, Apache Airflow
Job Locations: Pan India
Experience: 6-8 years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate
Job Description:
6 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
Proficiency in Python for ETL, data manipulation, and scripting.
Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
Strong knowledge of orchestration tools such as Apache Airflow or similar.
Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
Experience in data modeling, data warehousing, and database design.
Proficiency in working with cloud platforms like AWS, Azure, or GCP.
Strong understanding of CI/CD pipelines for data engineering workflows.
Experience working in an Agile development environment, collaborating with cross-functional teams.
Preferred Skills:
Familiarity with other programming languages like Scala or Java for data engineering tasks.
Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
Experience with stream processing frameworks like Apache Flink.
Experience with Apache Iceberg for data lake optimization and management.
Exposure to machine learning workflows and integration with data pipelines.
Soft Skills:
Strong problem-solving skills with a passion for solving complex data challenges.
Excellent communication and collaboration skills to work with cross-functional teams.
Ability to thrive in a fast-paced, innovative environment.
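As a rough sketch of the Python-to-Snowflake ETL work this role involves, the example below loads rows from a CSV file using the official snowflake-connector-python package. Credentials, table, and file names are placeholders; a production pipeline would typically stage files and use COPY INTO rather than row inserts.

```python
# Minimal Python ETL sketch: CSV file -> Snowflake table via the official connector.
import csv
import snowflake.connector

def load_orders(csv_path: str) -> None:
    with open(csv_path, newline="") as f:
        rows = [(r["order_id"], r["amount"]) for r in csv.DictReader(f)]

    conn = snowflake.connector.connect(
        account="my_account",    # placeholder
        user="etl_user",         # placeholder
        password="********",     # placeholder; use a secrets manager in practice
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "CREATE TABLE IF NOT EXISTS ORDERS (ORDER_ID STRING, AMOUNT NUMBER(12,2))"
        )
        # Bulk-bind the rows; the connector's default pyformat style uses %s placeholders.
        cur.executemany("INSERT INTO ORDERS (ORDER_ID, AMOUNT) VALUES (%s, %s)", rows)
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders("orders.csv")  # placeholder source file
```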
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Description
We’re looking for a RevOps Program Manager: a sharp operator who brings structure to marketing, sales, and CS data chaos, translating numbers into stories and action. You’ll build and own dashboards, deliver actionable monthly/quarterly reporting, and partner with leaders across Marketing, Sales, and CS. You won’t just “track” performance; you’ll dig deep, ask the uncomfortable questions, and flag issues before they become fires.
Key Responsibilities
Marketing & Demand Ops:
Generate and analyze inflow volume reports segmented by source/channel, and highlight trends MoM/QoQ/YoY.
Build and maintain dashboards for leading/lagging indicators.
Evaluate post-campaign performance with minimal prompting.
Proactively flag inflow anomalies and forecast potential SQL (Sales Qualified Lead) gaps.
Partner with Marketing and BDRs to surface and communicate insights.
Sales Funnel & Insights:
Deliver full-funnel reporting, from deal creation to Closed Won/Lost.
Cohort analysis (by industry, deal source, etc.).
Benchmark and present on qualification accuracy and pipeline drop-offs.
Regularly brief Sales leadership on rep performance, deal health, and pipeline risks.
CS Ops & Retention:
Track implementation progress, flag delays, and report on project timelines.
Maintain accurate CRM data for renewals, ownership, and deal values.
Identify and validate churn signals by deal type, industry, etc.
Provide “closed loop” feedback to Sales on deal quality, segment health, and misfit accounts.
Own reporting on NRR, retention, and account health by segment.
Skills & Experience Required
3–6 years’ experience in RevOps, Sales/Marketing/CS Ops, Analytics, or adjacent roles.
Strong technical chops: deep experience with reporting, data analytics, and dashboarding tools (e.g., Tableau, Power BI, Looker, or similar). Hands-on with CRM (Salesforce, HubSpot, or equivalent).
Proficient with SQL; experience with data pipelines (ETL, dbt, Airflow, etc.) a plus.
Comfortable wrangling large datasets; can audit, clean, and synthesize data without breaking a sweat.
Business Acumen:
Understands the GTM funnel and can spot issues/opportunities in complex data sets.
Knows how to balance speed vs. depth: can hustle when timelines are tight, but knows where to dig deep.
Soft Skills:
A natural hustler: proactive, unafraid to chase answers, and pushes for clarity.
Comfortable working in a flat hierarchy; collaborates across teams, not afraid to debate or question leadership.
Juggles multiple projects; thrives (not just survives) in ambiguity.
Excellent communicator: can turn complex findings into clear, actionable insights for both technical and non-technical audiences.
Bias for action: brings solutions, not just problems.
Preferred Qualifications
Experience building/maintaining data pipelines, or supporting large-scale analytics projects.
Prior experience in B2B SaaS, high-growth startups, or similar fast-paced environments.
Exposure to modern marketing/sales/CS tech stacks (think HubSpot, Marketo, Outreach, Gainsight, etc.).
Proven track record of influencing strategy with data-driven insights.
Why Join Us?
You’ll be in the room where decisions are made, not just reporting on them. If you want ownership, high visibility, and the opportunity to move the needle, you’ll fit right in. If you need every problem pre-defined and neatly packaged, you probably won’t.
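For a concrete sense of the month-over-month inflow reporting this role owns, here is a minimal pandas sketch. The CSV file and its columns (lead_id, source, created_at) are hypothetical; in practice the data would come from the CRM or a warehouse query.

```python
# Minimal pandas sketch: count lead inflow per source per month and compute MoM change.
import pandas as pd

def mom_inflow_by_source(path: str = "leads.csv") -> pd.DataFrame:
    leads = pd.read_csv(path, parse_dates=["created_at"])
    leads["month"] = leads["created_at"].dt.to_period("M")

    monthly = (
        leads.groupby(["source", "month"])["lead_id"]
        .count()
        .rename("inflow")
        .reset_index()
        .sort_values(["source", "month"])
    )
    # Month-over-month percentage change within each source.
    monthly["mom_change_pct"] = (
        monthly.groupby("source")["inflow"].pct_change() * 100
    ).round(1)
    return monthly

if __name__ == "__main__":
    print(mom_inflow_by_source())
```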
Posted 1 week ago
7.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
We are seeking a highly skilled and experienced Lead Data Engineer (7+ years) to join our dynamic team. As a Lead Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure. You will be responsible for ensuring the efficient and reliable collection, storage, and transformation of large-scale data to support business intelligence, analytics, and data-driven decision-making.
Key Responsibilities
Data Architecture & Design:
Lead the design and implementation of robust data architectures that support data warehousing (DWH), data integration, and analytics platforms.
Develop and maintain ETL (Extract, Transform, Load) pipelines to ensure the efficient processing of large datasets.
ETL Development:
Design, develop, and optimize ETL processes using tools like Informatica PowerCenter, Intelligent Data Management Cloud (IDMC), or custom Python scripts.
Implement data transformation and cleansing processes to ensure data quality and consistency across the enterprise.
Data Warehouse Development:
Build and maintain scalable data warehouse solutions using Snowflake, Databricks, Redshift, or similar technologies.
Ensure efficient storage, retrieval, and processing of structured and semi-structured data.
Big Data & Cloud Technologies:
Utilize AWS Glue and PySpark for large-scale data processing and transformation.
Implement and manage data pipelines using Apache Airflow for orchestration and scheduling.
Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analytics.
Data Management & Governance:
Establish and enforce data governance and security best practices.
Ensure data integrity, accuracy, and availability across all data platforms.
Implement monitoring and alerting systems to ensure data pipeline reliability.
Collaboration & Leadership:
Work closely with data stewards, analysts, and business stakeholders to understand data requirements and deliver solutions that meet business needs.
Mentor and guide junior data engineers, fostering a culture of continuous learning and development within the team.
Lead data-related projects from inception to delivery, ensuring alignment with business objectives and timelines.
Database Management:
Design and manage relational databases (RDBMS) to support transactional and analytical workloads.
Optimize SQL queries for performance and scalability across various database platforms.
Required Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
Experience:
Minimum of 7+ years of experience in data engineering, ETL, and data warehouse development.
Proven experience with ETL tools like Informatica PowerCenter or IDMC.
Strong proficiency in Python and PySpark for data processing.
Experience with cloud-based data platforms such as AWS Glue, Snowflake, Databricks, or Redshift.
Hands-on experience with SQL and RDBMS platforms (e.g., Oracle, MySQL, PostgreSQL).
Familiarity with data orchestration tools like Apache Airflow.
Technical Skills:
Advanced knowledge of data warehousing concepts and best practices.
Strong understanding of data modeling, schema design, and data governance.
Proficiency in designing and implementing scalable ETL pipelines.
Experience with cloud infrastructure (AWS, Azure, GCP) for data storage and processing.
Soft Skills:
Excellent communication and collaboration skills.
Ability to lead and mentor a team of engineers.
Strong problem-solving and analytical thinking abilities.
Ability to manage multiple projects and prioritize tasks effectively.
Preferred Qualifications
Experience with machine learning workflows and data science tools.
Certification in AWS, Snowflake, Databricks, or relevant data engineering technologies.
Experience with Agile methodologies and DevOps practices.
(ref:hirist.tech)
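As an illustration of the PySpark transformation and cleansing work this role calls for, here is a minimal batch sketch: read raw CSV, standardize types, deduplicate, and write partitioned Parquet. Paths and column names are placeholders, not the employer's actual schema; on AWS Glue the same logic would typically run inside a Glue job.

```python
# Minimal PySpark batch cleansing sketch: raw CSV -> typed, deduplicated Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleansing").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")  # placeholder path

cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))           # normalize timestamps
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))  # enforce numeric type
    .filter(F.col("order_id").isNotNull())                        # drop rows missing a key
    .dropDuplicates(["order_id"])                                 # deduplicate on business key
    .withColumn("order_date", F.to_date("order_ts"))              # partition column
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://curated-bucket/orders/")                      # placeholder path
)
```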
Posted 1 week ago
The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, and finance.
The average salary range for Airflow professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!