Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary Senior Data Scientist

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about Data Assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science & analytics in the company?

The Role Work closely with the global Optimization Solutions team to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support data insights and analytical needs across products, markets, and services. The candidate for this position will focus on building solutions using machine learning and creating actionable insights to support product optimization and sales enablement. Prototype new algorithms, experiment, evaluate and deliver actionable insights. Drive the evolution of products with an impact focused on data science and engineering. Design machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Perform data ingestion, aggregation, and processing on high-volume and high-dimensionality data to drive and enable data unification and produce relevant insights. Continuously innovate and determine new approaches, tools, techniques & technologies to solve business problems and generate business insights & recommendations. Apply knowledge of metrics, measurements, and benchmarking to complex and demanding solutions.
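The role above centers on ingesting and aggregating high-volume, high-dimensionality data with Python/Spark to surface product-optimization insights. Below is a minimal, hedged PySpark sketch of that kind of rollup; the table names, column names, and status values are hypothetical placeholders, not Mastercard's actual schema or pipeline.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative only: table and column names are invented for this sketch.
spark = (
    SparkSession.builder
    .appName("digital-performance-rollup")
    .enableHiveSupport()
    .getOrCreate()
)

txns = spark.table("analytics.digital_transactions")  # hypothetical Hive table

daily_rollup = (
    txns
    .withColumn("txn_date", F.to_date("event_ts"))
    .groupBy("txn_date", "product", "region")
    .agg(
        F.count("*").alias("attempts"),
        F.sum(F.when(F.col("status") == "APPROVED", 1).otherwise(0)).alias("approvals"),
    )
    .withColumn("approval_rate", F.col("approvals") / F.col("attempts"))
)

# Persist a partitioned summary table that dashboards or downstream models can read.
(
    daily_rollup.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("analytics.digital_performance_daily")
)
```

A summary table like this is typically what reporting and visualization layers query instead of the raw event data.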
All About You A superior academic record at a leading university in Computer Science, Data Science, Technology, Mathematics, Statistics, or a related field, or equivalent work experience. Experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis. Strong analytical skills with a track record of translating data into compelling insights. Prior experience working in a product development role. Knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture. Proficiency in using Python/Spark, Hadoop platforms & tools (Hive, Impala, Airflow, NiFi), and SQL to build Big Data products & platforms. Experience with an Enterprise Business Intelligence/Data platform, e.g. Tableau or Power BI, is a plus. Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively. Ability to build a strong narrative on the business value of products and actively participate in sales enablement efforts. Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor.

Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security training in accordance with Mastercard’s guidelines. R-250486
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role What is Aladdin Studio? When BlackRock was started in 1988, its founders envisioned a company that combined the best of financial services with cutting edge technology. They imagined a company that would provide financial services to clients as well as technology services to other financial firms. The result of their vision is Aladdin, our industry leading, end-to-end investment management platform. Aladdin Studio ("Studio") is an integrated platform to discover data, build financial applications, and connect Aladdin to other applications. Studio enables and empowers development for the entire Aladdin community - from citizen developers to full-time engineers. It includes Aladdin’s API surface, cloud investment data platform – the Aladdin Data Cloud, and hosted developer environment – Aladdin Compute.

Team Overview Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of our competitive advantage. As part of Aladdin Studio, the Aladdin Data Cloud (ADC) Engineering team is responsible for building and maintaining a data-as-a-service solution for all data management and transformation needs. We engineer high performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data surface capabilities. As a Data Engineer in the ADC Engineering team, you will: Work alongside our engineers to help design and build scalable data pipelines while evolving the data surface. Help prove out and productionize cloud-native infrastructure and tooling to support a scalable data cloud. Have fun as part of an awesome team.

Specific Responsibilities It's a mix of backend application engineering (Python) and data engineering to build solutions leveraging the existing Data Framework. Collaborating in a multi-disciplinary squad involving program and product managers, data scientists, and client professionals to expand the product offering based on business impact and demand. Be involved from the inception of projects, understanding requirements, designing & developing solutions, and incorporating them into the designs of our platforms. Maintain excellent knowledge of the technical landscape for data & cloud tooling. Assist in troubleshooting issues and support the operation of production software. Write technical documentation.

Required Skills 4+ years of industry experience in data engineering. Passion for engineering and optimizing data sets, data pipelines and architecture. Ability to build processes that support data transformation, workload management, data structures, lineage, and metadata. Knowledge of SQL and performance tuning. Experience with Snowflake is preferred. Good working knowledge of languages such as Python/Java. Understanding of software deployment and orchestration technologies such as Airflow. Experience in creating and evolving CI/CD pipelines with GitLab or Azure DevOps.

Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all.
Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
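The ADC Engineering role above calls for Python work against Snowflake with an eye on SQL performance tuning. Below is a small, hedged sketch of that interaction using the snowflake-connector-python package; the account settings, warehouse, table, and query are placeholder assumptions and not Aladdin Data Cloud specifics.

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Connection details are illustrative; a real deployment would use a secrets manager.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ADC_DEMO",      # hypothetical database
    schema="CURATED",
)

try:
    cur = conn.cursor()
    # Filtering on the date column keeps the scan narrow so Snowflake can prune
    # micro-partitions instead of reading the whole table.
    cur.execute(
        """
        SELECT portfolio_id, SUM(market_value) AS total_mv
        FROM positions
        WHERE as_of_date = %s
        GROUP BY portfolio_id
        ORDER BY total_mv DESC
        LIMIT 20
        """,
        ("2024-06-28",),
    )
    for portfolio_id, total_mv in cur.fetchall():
        print(portfolio_id, total_mv)
finally:
    conn.close()
```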
Posted 1 week ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description The Oracle Cloud Infrastructure (OCI) team offers a unique opportunity to design, build, and operate a comprehensive suite of large-scale, integrated cloud services within a broadly distributed, multi-tenant cloud environment. With a commitment to delivering exceptional cloud products, OCI empowers customers to tackle some of the world's most pressing challenges, providing tailored solutions that meet their evolving needs. Are you passionate about designing and building large-scale distributed monitoring and analytics solutions for the cloud? Do you thrive in environments that combine the agility and innovation of a startup with the resources and stability of a Fortune 100 company? As a member of our fast-growing team, you'll enjoy a high degree of autonomy, diverse challenges, and unparalleled opportunities for growth. This role offers substantial upside potential, high visibility, and accelerated career advancement. Join our team of talented individuals and tackle complex problems in distributed systems, data processing, metrics collection, data analytics, network monitoring, and multi-tenant Infrastructure-as-a-Service (IaaS) at massive scale, driving innovation and excellence in the cloud. We are seeking an experienced Principal Engineer to design and develop software, including automated test suites, for major components in our Network Monitoring & Analytics Stack. As a member of our team, you will have the opportunity to build large-scale distributed monitoring and analytics solutions for the cloud, working with a talented group of engineers to solve complex problems in distributed systems, data processing, and network monitoring. Do you thrive in a fast-paced environment, and want to be an integral part of a truly great team? Come join us! Required Qualifications: 9+ years of experience in software development 3+ years of experience in developing large scale distributed services/applications Proficiency with Java/Python/C++/Go and Object-Oriented programming Excellent knowledge of data structures, search/sort algorithms Excellent organizational, verbal, and written communication skills Bachelors degree in Computer Science Desired Qualifications: Knowledge of cloud computing & networking technologies including monitoring services Networking Management Technologies such as SNMP, gNMI, protobuf, YANG Models etc Networking Technologies such as L2/L3, TCP/IP, sockets, BGP, OSPF, LLDP, ICMP etc Experience developing service-oriented systems Exposure to Kafka, Prometheus, Spark, Airflow, Flink or other open-source distributed data streaming platforms and databases Experience developing automated test suites Experience with Jira, Confluence, BitBucket Knowledge of Scrum & Agile Methodologies Responsibilities Design and develop software for major components in our Network Monitoring & Analytics Stack Build complex distributed systems involving large amounts of data handling, including collecting metrics, building data pipelines, and analytics for real-time processing, online processing, and batch processing Develop automated test suites to ensure high-quality solutions Collaborate with cross-functional teams to deliver cloud services that meet customer needs Participate in an agile environment, contributing to the development of innovative new systems to power business-critical applications Qualifications Career Level - IC4 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. 
We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
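The Oracle posting above revolves around large-scale metrics collection for network monitoring and analytics. As a small, hedged illustration of that kind of telemetry work (not Oracle's actual stack), the sketch below exposes a couple of device metrics with the prometheus_client Python library; the device list and probe logic are invented placeholders.

```python
import random
import time

from prometheus_client import Gauge, start_http_server  # pip install prometheus-client

# Hypothetical devices; a real collector would discover these from inventory.
DEVICES = ["leaf-101", "leaf-102", "spine-201"]

link_up = Gauge("network_link_up", "1 if the device link is up, else 0", ["device"])
rtt_ms = Gauge("network_probe_rtt_milliseconds", "Round-trip time of a probe", ["device"])

def probe(device: str) -> tuple[bool, float]:
    # Placeholder probe: a real implementation might use SNMP, gNMI, or ICMP.
    return random.random() > 0.05, random.uniform(0.2, 4.0)

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://<host>:9100/metrics
    while True:
        for device in DEVICES:
            up, rtt = probe(device)
            link_up.labels(device=device).set(1 if up else 0)
            rtt_ms.labels(device=device).set(rtt)
        time.sleep(15)
```

A pull-based exporter like this is one common way per-device health ends up in a time-series store for dashboards and alerting.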
Posted 1 week ago
40.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Who We Are Escalent is an award-winning data analytics and advisory firm that helps clients understand human and market behaviors to navigate disruption. As catalysts of progress for more than 40 years, our strategies guide the world’s leading brands. We accelerate growth by creating a seamless flow between primary, secondary, syndicated, and internal business data, providing consulting and advisory services from insights through implementation. Based on a profound understanding of what drives human beings and markets, we identify actions that build brands, enhance customer experiences, inspire product innovation and boost business productivity. We listen, learn, question, discover, innovate, and deliver—for each other and our clients—to make the world work better for people. Why Escalent? Once you join our team you will have the opportunity to... Access experts across industries for maximum learning opportunities including Weekly Knowledge Sharing Sessions, LinkedIn Learning, and more. Gain exposure to a rich variety of research techniques from knowledgeable professionals. Enjoy a remote first/hybrid work environment with a flexible schedule. Obtain insights into the needs and challenges of your clients—to learn how the world’s leading brands use research. Experience peace of mind working for a company with a commitment to conducting research ethically. Build lasting relationships with fun colleagues in a culture that values each person. Role Overview We are looking for a Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power analytics, machine learning, and business intelligence. You will work closely with data scientists, analysts, and software engineers to ensure efficient data ingestion, transformation, and management. 
Roles & Responsibilities Design, develop, and maintain scalable ETL/ELT pipelines to extract, transform, and load data from diverse sources. Build and optimize data storage solutions using SQL and NoSQL databases, data lakes, and cloud warehouses (Snowflake, BigQuery, Redshift). Ensure data quality, integrity, and security through automated validation, governance, and monitoring frameworks. Collaborate with data scientists and analysts to provide clean, structured, and accessible data for reporting and AI/ML models. Implement best practices for performance tuning, indexing, and query optimization to handle large-scale datasets. Stay updated with emerging data engineering technologies, architectures, and industry best practices. Write clean and structured code as defined in the team’s coding standards and create documentation for best practices.

Required Skills Minimum 6 years of experience in Python, SQL, and data processing frameworks (Pandas, Spark, Hadoop). Experience with cloud-based data platforms (AWS, Azure, GCP) and services like S3, Glue, Athena, Data Factory, or BigQuery. Solid understanding of database design, data modeling and warehouse architectures. Hands-on experience with ETL/ELT pipelines and workflow orchestration tools (Apache Airflow, Prefect, Luigi). Knowledge of APIs, RESTful services and integrating multiple data sources. Strong problem-solving and debugging skills in handling large-scale data processing challenges. Experience with version control systems (Git, GitHub, GitLab). Ability to work in a team setting. Organizational and time management skills.

Desirable Skills Experience working with Agile development methodologies. Experience in building self-service data platforms for business users and analysts. Effective skills in written and verbal communication.
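The responsibilities above describe extract-transform-load work between a raw landing zone and a curated layer on AWS-style storage. Here is a minimal, hedged sketch of one such batch step using boto3 and pandas; the bucket names, keys, and column names are assumptions made up for illustration.

```python
import io

import boto3        # pip install boto3
import pandas as pd  # pip install pandas pyarrow

# Bucket and key names are placeholders for this sketch.
SOURCE_BUCKET = "example-raw-zone"
SOURCE_KEY = "orders/2024-06-28/orders.csv"
TARGET_BUCKET = "example-curated-zone"
TARGET_KEY = "orders/date=2024-06-28/orders.parquet"

s3 = boto3.client("s3")

# Extract: pull the raw CSV out of the landing bucket.
raw = s3.get_object(Bucket=SOURCE_BUCKET, Key=SOURCE_KEY)
orders = pd.read_csv(raw["Body"])

# Transform: basic cleanup and a derived column, standing in for real business rules.
orders = orders.dropna(subset=["order_id", "customer_id"])
orders["order_ts"] = pd.to_datetime(orders["order_ts"], utc=True)
orders["net_amount"] = orders["gross_amount"] - orders["discount"]

# Load: write Parquet back to the curated zone for Athena or warehouse consumption.
buffer = io.BytesIO()
orders.to_parquet(buffer, index=False)
s3.put_object(Bucket=TARGET_BUCKET, Key=TARGET_KEY, Body=buffer.getvalue())
```

In practice a step like this would be wrapped in an orchestrator task (Airflow, Prefect, or Luigi, as the posting lists) rather than run by hand.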
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
India
On-site
The proliferation of machine log data has the potential to give organizations unprecedented real-time visibility into their infrastructure and operations. With this opportunity come tremendous technical challenges around ingesting, managing, and understanding high-volume streams of heterogeneous data. As a Machine Learning Engineer at Sumo Logic, you will actively contribute to the design and development of innovative ML-powered product capabilities to help our customers make sense of their huge amounts of log data. This involves working through the entire feature lifecycle including ideation, dataset construction, experimental validation, prototyping, production implementation, deployment, and operations.

Responsibilities Identifying and validating opportunities for the application of ML or data-driven techniques. Assessing requirements and approaches for large-scale data and ML platform components. Driving technical delivery through the full feature lifecycle, from idea to production and operations. Helping the team design and implement extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data. Collaborating within and beyond the team to identify problems and deliver solutions. Working as a member of a team, helping the team respond quickly and effectively to business needs.

Requirements B.Tech, M.Tech, or Ph.D. in Computer Science or a related discipline. 4-6 years of industry experience with a proven track record of ownership and delivery. Experience formulating use cases as ML problems and putting ML models into production. Solid grounding in core ML concepts and basic statistics. Experience with software engineering of production-grade services in cloud environments handling data at large scale.

Desirable Cloud-based application and infrastructure deployment and management. Common ML libraries (e.g., scikit-learn, PyTorch) and components (e.g., Airflow, MLflow). Relevant cloud provider services (e.g., AWS SageMaker). LLM core concepts, libraries, and application design patterns. Experience in multi-threaded programming and distributed systems. Agile software development experience (test-driven development, iterative and incremental development) is a plus.

About Us Sumo Logic, Inc., empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its SaaS analytics platform. The Sumo Logic Continuous Intelligence Platform™ helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
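The role above is about formulating log-analytics use cases as ML problems, for example flagging unusual behaviour in high-volume log streams. The sketch below is a hedged illustration of that idea with scikit-learn's IsolationForest on synthetic per-minute log features; the feature set, contamination rate, and data are invented for the example and are not Sumo Logic's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

# Synthetic stand-in for per-minute log features; a real pipeline would derive
# these from the ingested log stream (counts, error ratios, unique sources, ...).
rng = np.random.default_rng(42)
normal = rng.normal(loc=[1000, 0.01], scale=[50, 0.005], size=(500, 2))
bursts = rng.normal(loc=[5000, 0.20], scale=[300, 0.05], size=(5, 2))
features = np.vstack([normal, bursts])  # columns: log_volume, error_rate

# Fit an unsupervised detector; the contamination value is a tuning assumption.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(features)  # -1 = anomalous minute, 1 = normal

anomalous_minutes = np.where(labels == -1)[0]
print("flagged minutes:", anomalous_minutes)
```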
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description - External Company Overview: Schneider Electric is a global leader in energy management and automation, committed to providing innovative solutions that ensure Life Is On everywhere, for everyone, and at every moment. We are expanding our team in Gurugram and looking for a Senior Engineer – DevOps to enhance our cloud capabilities and drive the integration of digital technologies in our operations.

Job Description: We are seeking a highly skilled and experienced Senior Design Engineer – DevOps to lead the design, implementation, and optimization of our cloud infrastructure and CI/CD pipelines. The ideal candidate will have deep expertise in Microsoft Azure and working knowledge of AWS, with a strong background in infrastructure as code, automation, and cloud-first architecture. The role will collaborate with cross-functional teams to deliver seamless integration and operation of our systems and services.

Key Responsibilities: Technical: Azure Cloud Infrastructure: Design, implement, and maintain Azure-based infrastructure, leveraging services like Azure Kubernetes Service (AKS), Azure Functions, Virtual Machines, and App Services. AWS Support: Maintain a good understanding of the AWS environment to provide support in AWS-to-Azure migration. CI/CD Pipelines: Develop and manage continuous integration and delivery pipelines using Azure DevOps and GitHub Actions. Infrastructure as Code (IaC): Automate provisioning and configuration using tools like Terraform, Azure Resource Manager (ARM) templates, or Bicep. Monitoring & Observability: Implement robust monitoring and logging solutions with Azure Monitor and Application Insights, and integrate with tools like Prometheus or Grafana. Security & Compliance: Enforce best practices in cloud security, implement Azure Security Center recommendations, and manage compliance standards (e.g., SOC 2, GDPR); apply working knowledge of security concepts including VPN, firewalls, and iptables in daily tasks. Automation: Automate workflows, infrastructure scaling, and backups using various automation and scripting tools. Collaboration: Work closely with developers, QA teams, and IT operations to streamline processes and improve deployment reliability. Troubleshooting: Investigate and resolve issues related to infrastructure, performance, and application integration. Documentation: Maintain detailed documentation for Azure and AWS configurations, workflows, and processes.

Requirements: 5+ years of experience in DevOps, with 1+ years of experience in handling SecOps. In-depth knowledge of Azure services like AKS, Azure Functions, Azure DevOps, Azure AD, and Networking. Experience with core AWS services such as EC2, S3, RDS, and Lambda, and with different open-source tools. Proficiency in Docker and Kubernetes. Hands-on experience with Terraform, ARM templates, or Bicep for Azure. Programming/Scripting: Strong skills in scripting with PowerShell, Python, or Bash. Expertise in Azure DevOps, GitHub Actions, or Jenkins. Familiarity with Azure Monitor, Log Analytics, Application Insights, and tools like Grafana or ELK. Strong knowledge of Azure Security Center, Key Vault, and IAM. Proficiency in Git and Git-based workflows.

Good to have: Experience with Kafka for message streaming. Familiarity with Druid for data-intensive applications. Knowledge of Apache Airflow for workflow management. Experience in managing cloud infrastructure on Azure/AWS for a large customer.
Experience in migrating cloud environments from one vendor to another (AWS to Azure or vice versa) or across regions within a single vendor.

Soft Skills: Stay updated on the latest Azure and AWS developments, recommending new technologies and practices to improve operations. Excellent problem-solving abilities and strong communication skills. Advanced verbal and written communication skills, including the ability to explain and present technical concepts to a diverse set of audiences. Good judgment, time management, and decision-making skills. Strong teamwork and interpersonal skills; ability to communicate and thrive in a cross-functional environment. Willingness to work outside the documented job description. Has a "whatever is needed" attitude.

Qualifications - External Preferred Qualifications: Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field. Microsoft Certified: Azure Solutions Architect, Azure DevOps Engineer Expert, or equivalent certifications. AWS Certification (e.g., Solutions Architect Associate). Experience with hybrid cloud and multi-cloud architectures. Knowledge of microservices and service mesh solutions. Familiarity with database optimization on Azure SQL or Cosmos DB. Prior experience in the energy sector or industrial automation is advantageous.
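The responsibilities above lean heavily on scripted monitoring and automation around Kubernetes (AKS). As a small, hedged sketch in that spirit, the script below lists unhealthy pods with the official kubernetes Python client; the namespace, restart threshold, and alerting step are assumptions for illustration, not Schneider Electric's tooling.

```python
from kubernetes import client, config  # pip install kubernetes

# Assumes a kubeconfig is available locally (e.g. pulled for an AKS cluster);
# code running inside the cluster would use config.load_incluster_config() instead.
config.load_kube_config()
core = client.CoreV1Api()

NAMESPACE = "payments"   # hypothetical namespace
RESTART_THRESHOLD = 5    # alerting threshold is an assumption

unhealthy = []
for pod in core.list_namespaced_pod(NAMESPACE).items:
    statuses = pod.status.container_statuses or []
    restarts = sum(s.restart_count for s in statuses)
    if pod.status.phase != "Running" or restarts >= RESTART_THRESHOLD:
        unhealthy.append((pod.metadata.name, pod.status.phase, restarts))

for name, phase, restarts in unhealthy:
    # A real setup would raise an alert (e.g. push to Azure Monitor); here we just print.
    print(f"unhealthy pod: {name} phase={phase} restarts={restarts}")
```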
Posted 1 week ago
6.0 - 9.0 years
0 - 3 Lacs
Pune, Chennai, Bengaluru
Hybrid
Preferred candidate profile GCP Cloud Data Engineers having strong experience in cloud migrations and pipelines with 5+ years of experience
• Good understanding of Database and Data Engineering concepts.
• Experience in cloud migration is a must
• Experience in data ingestion and processing from different sources
• Conceptual knowledge of understanding data, building ETL pipelines, data integrations, and ODS/DW.
• Hands-on experience in SQL and Python
• Experience in Java development is required
• Hands-on working experience in Google Cloud Platform Dataflow, Data Transfer services, Airflow
• Hands-on working experience in data preprocessing techniques using Dataflow, Dataproc, Dataprep
• Hands-on working experience in BigQuery
• Knowledge of Kafka, Pub/Sub, GCS & schedulers is required
• Proficiency with PostgreSQL is preferred
• Experience with both real-time and scheduled pipelines is preferred
• Cloud certification is a plus
• Experience in implementing ETL pipelines
• Familiarity with microservices or Enterprise Application Integration Patterns is a plus
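Since BigQuery is central to the profile above, here is a minimal, hedged sketch of querying it from Python with the google-cloud-bigquery client; the project, dataset, table, and columns are placeholder assumptions.

```python
import datetime

from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumes application-default credentials; project/dataset/table names are invented.
client = bigquery.Client(project="example-analytics-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-analytics-project.raw_events.clickstream`
    WHERE event_date BETWEEN @start AND @end
    GROUP BY event_date
    ORDER BY event_date
"""

job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", datetime.date(2024, 6, 1)),
            bigquery.ScalarQueryParameter("end", "DATE", datetime.date(2024, 6, 7)),
        ]
    ),
)

for row in job.result():  # blocks until the query finishes
    print(row.event_date, row.events)
```

Parameterized queries like this are the usual way scheduled pipelines (Airflow, Cloud Composer) re-run the same SQL for different date windows.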
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Assistant Manager - Data Engineer Location: Andheri (Mumbai) Job Type: Full-Time Department: IT Position Overview: The Assistant Manager - Data Engineer will play a pivotal role in the design, development, and maintenance of data pipelines that ensure the efficiency, scalability, and reliability of our data infrastructure. This role will involve optimizing and automating ETL/ELT processes, as well as developing and refining databases, data warehouses, and data lakes. As an Assistant Manager, you will also mentor junior engineers and collaborate closely with cross-functional teams to support business goals and drive data excellence. Key Responsibilities: Data Pipeline Development: Design, build, and maintain efficient, scalable, and reliable data pipelines to support data analytics, reporting, and business intelligence initiatives. Database and Data Warehouse Management: Develop, optimize, and manage databases, data warehouses, and data lakes to enhance data accessibility and business decision-making. ETL/ELT Optimization: Automate and optimize data extraction, transformation, and loading (ETL/ELT) processes, ensuring efficient data flow and improved system performance. Data Modeling & Architecture: Develop and maintain data models to enable structured data storage, analysis, and reporting in alignment with business needs. Workflow Management Systems: Implement, optimize, and maintain workflow management tools (e.g., Apache Airflow, Talend) to streamline data engineering tasks and improve operational efficiency. Team Leadership & Mentorship: Guide, mentor, and support junior data engineers to enhance their skills and contribute effectively to projects. Collaboration with Cross-Functional Teams: Work closely with data scientists, analysts, business stakeholders, and IT teams to understand requirements and deliver solutions that align with business objectives. Performance Optimization: Continuously monitor and optimize data pipelines and storage solutions to ensure maximum performance and cost efficiency. Documentation & Process Improvement: Create and maintain documentation for data models, workflows, and systems. Contribute to the continuous improvement of data engineering practices. Qualifications: Educational Background: B.E., B.Tech., MCA Professional Experience: At least 5 to 7 years of experience in a data engineering or similar role, with hands-on experience in building and optimizing data pipelines, ETL processes, and database management. Technical Skills: Proficiency in Python and SQL for data processing, transformation, and querying. Experience with modern data warehousing solutions (e.g., Amazon Redshift, Snowflake, Google BigQuery, Azure Data Lake). Strong background in data modeling (dimensional, relational, star/snowflake schema). Hands-on experience with ETL tools (e.g., Apache Airflow, Talend, Informatica) and workflow management systems . Familiarity with cloud platforms (AWS, Azure, Google Cloud) and distributed data processing frameworks (e.g., Apache Spark). Data Visualization & Exploration: Familiarity with data visualization tools (e.g., Tableau, Power BI) for analysis and reporting. Leadership Skills: Demonstrated ability to manage and mentor a team of junior data engineers while fostering a collaborative and innovative work environment. Problem-Solving & Analytical Skills: Strong analytical and troubleshooting skills with the ability to optimize complex data systems for performance and scalability. 
Experience in Pharma/Healthcare (preferred but not required): Knowledge of the pharmaceutical industry and experience with data in regulated environments Desired Skills: Familiarity with industry-specific data standards and regulations. Experience working with machine learning models or data science pipelines is a plus. Strong communication skills with the ability to present technical data to non-technical stakeholders. Why Join Us: Impactful Work: Contribute to the pharmaceutical industry by improving data-driven decisions that impact public health. Career Growth: Opportunities to develop professionally in a fast-growing industry and company. Collaborative Environment: Work with a dynamic and talented team of engineers, data scientists, and business stakeholders. Competitive Benefits: Competitive salary, health benefits and more.
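The Assistant Manager role above asks for hands-on Apache Airflow experience to orchestrate ETL/ELT steps. Below is a minimal, hedged DAG sketch assuming Airflow 2.4+ (where the `schedule` argument supersedes `schedule_interval`); the DAG id, schedule, and task callables are hypothetical placeholders rather than a real pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for real extract/transform/load logic.
def extract(**context):
    print("pull source extracts for", context["ds"])

def transform(**context):
    print("apply business rules for", context["ds"])

def load(**context):
    print("load curated tables for", context["ds"])

with DAG(
    dag_id="daily_sales_elt",     # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",         # run daily at 02:00
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The `>>` chaining is what defines the dependency order the scheduler enforces between the three steps.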
Posted 1 week ago
5.0 - 7.0 years
18 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Type: Contract-to-Hire (C2H) Job Summary We are looking for a skilled PySpark Developer with a minimum of 4 years (mandatory) of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and working with modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities Strong expertise in PySpark and Apache Spark for batch and real-time data processing. Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation. Proficiency in Python for scripting, automation, and building reusable components. Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows. Familiarity with the AWS ecosystem, especially S3 and related file system operations. Strong understanding of Unix/Linux environments and shell scripting. Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks. Ability to handle CDC (Change Data Capture) operations on large datasets. Experience in performance tuning, optimizing Spark jobs, and troubleshooting. Strong knowledge of data modeling, data validation, and writing unit test cases. Exposure to real-time and batch integration with downstream/upstream systems. Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging. Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills Experience in building or integrating APIs for data provisioning. Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView. Familiarity with AI/ML model development using PySpark in cloud environments.
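One of the skills listed above is handling CDC (Change Data Capture) on large datasets in PySpark. A common pattern for that is keeping only the latest change event per business key; the hedged sketch below shows it with window functions. The S3 paths, column names, and the I/U/D operation codes are assumptions for illustration.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("cdc-latest-record").getOrCreate()

# Hypothetical change feed landed on S3: one row per insert/update/delete event.
changes = spark.read.parquet("s3://example-landing/customers_cdc/")

# Keep only the most recent event per business key, then drop deletes.
latest_first = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())

current_snapshot = (
    changes
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .filter(F.col("op") != "D")   # 'op' column assumed to carry I/U/D codes
    .drop("rn")
)

# Overwrite the curated snapshot that downstream jobs query.
current_snapshot.write.mode("overwrite").parquet("s3://example-curated/customers/")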
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About PhonePe Group: PhonePe is India’s leading digital payments company with 50 crore (500 Million) registered users and 3.7 crore (37 Million) merchants covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store which is India's first localized App Store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services. Culture At PhonePe, we take extra care to make sure you give your best at work, Everyday! And creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country and executing on your dreams with purpose and speed, join us! Job Overview: As a Site Reliability Engineer (SRE) specializing in DataPlatform OnPremise, you will play a critical role in deployment, ensuring the reliability, scalability, and performance of our Cloudera Data Platform (CDP) infrastructure. You will collaborate closely with cross-functional teams to design, implement, and maintain robust systems that support our data-driven initiatives. The ideal candidate will have a deep understanding of Data Platform, strong troubleshooting skills, and a proactive mindset towards automation and optimization.You will play a pivotal role in ensuring the smooth functioning, operation, performance and security of large high density Cloudera-based infrastructure. Roles and Responsibilities: Work on tasks related to implementation of Cloudera Data Platform Cloudera Data Platform on-premises and be a part of planning, installation, configuration, and integration with existing systems. Infrastructure Management: Manage and maintain the Cloudera-based infrastructure, ensuring optimal performance, high availability, and scalability. This includes monitoring system health, and performing routine maintenance tasks. Strong troubleshooting skills and operational expertise in areas such as system capacity, bottlenecks, memory, CPU, OS, storage, and networking. Creating Runbooks and automating them using scripting tools like Shell scripting, Python etc. Working knowledge with any of the configuration management tools like Terraform, Ansible or SALT Data Security and Compliance: Implement and enforce security best practices to safeguard data integrity and confidentiality within the Cloudera environment. Ensure compliance with relevant regulations and standards (e.g., GDPR, HIPAA, DPR). Performance Optimization: Continuously optimize the Cloudera infrastructure to enhance performance, efficiency, and cost-effectiveness. Identify and resolve bottlenecks, tune configurations, and implement best practices for resource utilization. Capacity Planning: Planning and performance tuning of Hadoop clusters, Monitor resource utilization trends and plan for future capacity needs. Proactively identify potential capacity constraints and propose solutions to address them. 
Collaborate effectively with infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability. Work closely with teams to optimize the overall performance of the PhonePe Hadoop ecosystem. Backup and Disaster Recovery: Implement robust backup and disaster recovery strategies to ensure data protection and business continuity. Test and maintain backup and recovery procedures regularly. Develop tools and services to enhance debuggability and supportability. Patches & Upgrades: Routinely apply recommended patches and perform rolling upgrades of the platform in accordance with the advisory from Cloudera, InfoSec and Compliance. Documentation and Knowledge Sharing: Create comprehensive documentation for configurations, processes, and procedures related to the Cloudera Data Platform. Share knowledge and best practices with team members to foster continuous learning and improvement. Collaboration and Communication: Collaborate effectively with cross-functional teams including data engineers, developers, and IT operations personnel. Communicate project status, issues, and resolutions clearly and promptly.

Skills Required: Bachelor's degree in Computer Science, Engineering, or a related field. Proficiency in Linux system administration, shell scripting, and networking concepts including iptables and IPsec. Strong understanding of networking, open-source technologies, and tools. 3-5 years of experience in the design, set up, and management of large-scale Hadoop clusters, ensuring high availability, fault tolerance, and performance optimization. Strong understanding of distributed computing principles and experience with Hadoop ecosystem technologies (HDFS, MapReduce, YARN, Hive, Spark, etc.). Experience with Kerberos and LDAP. Strong knowledge of databases such as MySQL, NoSQL stores, and SQL Server. Hands-on experience with configuration management tools (e.g., Salt, Ansible, Puppet, Chef). Strong scripting skills (e.g., Perl, Python, Bash) for automation and troubleshooting. Experience with monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack). Knowledge of networking principles and protocols (TCP/IP, UDP, DNS, DHCP, etc.). Experience with managing *nix based machines and strong working knowledge of quintessential Unix programs and tools (e.g. Ubuntu, Fedora, Red Hat, etc.). Excellent communication skills and the ability to collaborate effectively with cross-functional teams. Excellent analytical, problem-solving, and troubleshooting skills. Proven ability to work well under pressure and manage multiple priorities simultaneously.

Good To Have: Cloudera Certified Administrator (CCA) or Cloudera Certified Professional (CCP) certification preferred. Minimum 2 years of experience in managing and administering medium/large Hadoop-based environments (>100 machines); Cloudera Data Platform (CDP) experience is highly desirable. Familiarity with Open Data Lake components such as Ozone, Iceberg, Spark, Flink, etc. Familiarity with containerization and orchestration technologies (e.g. Docker, Kubernetes, OpenShift) is a plus. Design, develop, and maintain Airflow DAGs and tasks to automate BAU processes, ensuring they are robust, scalable and efficient.
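The SRE role above involves scripted health, capacity, and monitoring checks against large Hadoop clusters. As a loose, hedged sketch of that kind of runbook automation, the script below reads a few NameNode figures over its JMX HTTP endpoint; the hostname, port (9870 is the Hadoop 3.x web UI default), bean name, and metric field names vary by Hadoop/CDP version and are assumptions here, not PhonePe's actual checks.

```python
import requests  # pip install requests

# Assumptions for this sketch: NameNode HTTP UI on port 9870 and the FSNamesystem
# JMX bean; hostnames, ports, bean and field names depend on the cluster setup.
NAMENODE = "http://namenode.example.internal:9870"
CAPACITY_ALERT_PCT = 80.0

resp = requests.get(
    f"{NAMENODE}/jmx",
    params={"qry": "Hadoop:service=NameNode,name=FSNamesystem"},
    timeout=10,
)
resp.raise_for_status()
fs = resp.json()["beans"][0]

used_pct = 100.0 * fs["CapacityUsed"] / fs["CapacityTotal"]
missing_blocks = fs.get("MissingBlocks", 0)

# In a real runbook this would feed Prometheus or paging; here we only report.
print(f"HDFS capacity used: {used_pct:.1f}%")
if used_pct > CAPACITY_ALERT_PCT:
    print("ALERT: HDFS capacity above threshold")
if missing_blocks:
    print(f"ALERT: {missing_blocks} missing blocks reported by the NameNode")
```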
PhonePe Full Time Employee Benefits (Not applicable for Intern or Contract Roles) Insurance Benefits - Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance Wellness Program - Employee Assistance Program, Onsite Medical Center, Emergency Support System Parental Support - Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program Mobility Benefits - Relocation benefits, Transfer Support Policy, Travel Policy Retirement Benefits - Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment Other Benefits - Higher Education Assistance, Car Lease, Salary Advance Policy Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe on our blog. Life at PhonePe PhonePe in the news
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DAZN is a tech-first sport streaming platform that reaches millions of users every week. We are challenging a traditional industry and giving power back to the fans. Our new Hyderabad tech hub will be the engine that drives us forward to the future. We’re pushing boundaries and doing things no-one has done before. Here, you have the opportunity to make your mark and the power to make change happen - to make a difference for our customers. When you join DAZN you will work on projects that impact millions of lives thanks to your critical contributions to our global products. This is the perfect place to work if you are passionate about technology and want an opportunity to use your creativity to help grow and scale a global range of IT systems, Infrastructure, and IT Services. Our cutting-edge technology allows us to stream sports content to millions of concurrent viewers globally across multiple platforms and devices. DAZN’s cloud-based architecture unifies a range of technologies to deliver a seamless user experience and support a global user base and company infrastructure. This role will be based in our brand-new Hyderabad office. Join us in India’s beautiful “City of Pearls” and bring your ambition to life. We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3–4 years specifically in Data Engineering on a cloud platform.

Key Skills & Experience: Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch. Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka. In-depth knowledge of ETL data patterns and Spark-based ETL pipelines. Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools. Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS. Proficiency in Kubernetes, container orchestration, and CI/CD pipelines. Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions. Experience with orchestration tools such as Apache Airflow and serverless/FaaS services. Exposure to NoSQL databases is a plus.
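The skills list above includes Apache Kafka alongside warehouse loading. Below is a small, hedged consumer sketch using the kafka-python package; the broker address, topic, consumer group, and message fields are placeholders invented for illustration, not DAZN's streaming setup.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Broker addresses, topic, and message shape are placeholders for this sketch.
consumer = KafkaConsumer(
    "playback-events",
    bootstrap_servers=["kafka-1.example.internal:9092"],
    group_id="playback-aggregator",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Tiny streaming aggregation: count events per device type, standing in for a real
# sink (e.g. buffering batches and loading them into Snowflake or S3).
counts: dict[str, int] = {}
for message in consumer:
    event = message.value
    device = event.get("device_type", "unknown")
    counts[device] = counts.get(device, 0) + 1
    if sum(counts.values()) % 1000 == 0:
        print("events by device so far:", counts)
```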
Posted 1 week ago
8.0 years
0 Lacs
Hyderābād
On-site
Why We Work at Dun & Bradstreet Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers. Our global community of colleagues bring a diverse range of experiences and perspectives to our work. You'll find us working from a corporate office or plugging in from a home desk, listening to our customers and collaborating on solutions. Our products and solutions are vital to businesses of every size, scope and industry. And at the heart of our work, you’ll find our core values: to be data inspired, relentlessly curious and inherently generous. Our values are the constant touchstone of our community; they guide our behavior and anchor our decisions. Key Responsibilities: Design and Develop Data Pipelines: Architect, build, and deploy scalable and efficient data pipelines within our Big Data ecosystem using Apache Spark and Apache Airflow. Document new and existing pipelines and datasets to ensure clarity and maintainability. Data Architecture and Management : Demonstrate familiarity with data pipelines, data lakes, and modern data warehousing practices, including virtual data warehouses and push-down analytics. Design and implement distributed data processing solutions using technologies like Apache Spark and Hadoop. Programming and Scripting : Exhibit expert-level programming skills in Python, with the ability to write clean, efficient, and maintainable code. Cloud Infrastructure: Utilize cloud-based infrastructures (AWS/GCP) and their various services, including compute resources, databases, and data warehouses. Manage and optimize cloud-based data infrastructure, ensuring efficient data storage and retrieval. Workflow Orchestration: Develop and manage workflows using Apache Airflow for scheduling and orchestrating data processing jobs. Create and maintain Apache Airflow DAGs for workflow orchestration. Big Data Architecture : Possess strong knowledge of Big Data architecture, including cluster installation, configuration, monitoring, security, resource management, maintenance, and performance tuning. Innovation and Optimization : Create detailed designs and proof-of-concepts (POCs) to enable new workloads and technical capabilities on the platform. Collaborate with platform and infrastructure engineers to implement these capabilities in production. Manage workloads and optimize resource allocation and scheduling across multiple tenants to fulfill service level agreements (SLAs). Continuous Learning and Collaboration: Participate in planning activities and collaborate with data science teams to enhance platform skills and capabilities. Key Skills: Minimum 8+ years of hands-on experience in Big Data technologies, including a minimum of 3 year's experience working with Spark, Pyspark. Experience with Google Cloud Platform (GCP) is preferred, particularly with Dataproc, and at least 6 years of experience in cloud environments is required. Must have hands-on experience in managing cloud-deployed solutions, preferably on AWS, along with NoSQL and Graph databases. Prior experience working in a global organization and within a DevOps model is considered a strong plus. 
All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com. Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which governs the processing of visitor data on this platform.
Posted 1 week ago
3.0 years
0 Lacs
Hyderābād
Remote
ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money.

ABOUT THE ROLE: As part of the team, you will be responsible for building and running the data pipelines and services that are required to support business functions/reports/dashboards. We are heavily dependent on BigQuery/Snowflake, Airflow, Stitch/Fivetran, DBT and Tableau/Looker for our business intelligence, and embrace AWS with some GCP. As a Data Engineer you'll be: Developing end-to-end ETL/ELT pipelines, working with Data Analysts across business functions. Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis in a Data Mesh architecture. Mentoring other junior engineers in the team. Being a "go-to" expert for data technologies and solutions. Providing on-the-ground troubleshooting and diagnosis for architecture and design challenges. Troubleshooting and resolving technical issues as they arise. Looking for ways of improving both what and how data pipelines are delivered by the department. Translating business requirements into technical requirements, such as entities that need to be modelled, DBT models that need to be built, timings, tests and reports. Owning the delivery of data models and reports end to end. Performing exploratory data analysis in order to identify data quality issues early in the process and implementing tests to prevent them in the future. Working with Data Analysts to ensure that all data feeds are optimised and available at the required times.
This can include Change Capture, Change Data Control and other "delta loading" approaches Discovering, transforming, testing, deploying and documenting data sources Applying, help defining, and championing data warehouse governance: data quality, testing, coding best practices, and peer review Building Looker Dashboard for use cases if required WHAT WE ARE LOOKING FOR: You have 3+ years of extensive development experience using snowflake or similar data warehouse technology You have working experience with DBT and other technologies of the modern data stack, such as Snowflake, Apache Airflow, Fivetran, AWS, Git ,Looker You have experience in agile processes, such as SCRUM You have extensive experience in writing advanced SQL statements and performance tuning them You have experience in Data Ingestion techniques using custom or SAAS tool like Fivetran You have experience in data modelling and can optimize existing/new data models You have experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets You have experience architecting analytical databases (in Data Mesh architecture) is added advantage You have experience working in agile cross-functional delivery team You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment You have strong technical documentation skills and the ability to be clear and precise with business users You have business-level of English and good communication skills You have basic understanding of various systems across the AWS platform ( Good to have ) Preferably, you have worked in a digitally native company, ideally fintech Experience with python, governance tool (e.g. Atlan, Alation, Collibra) or data quality tool (e.g. Great Expectations, Monte Carlo, Soda) will be added advantage Our Tech Stack: DBT Snowflake Airflow Fivetran SQL Looker WHAT YOU'LL GET IN RETURN: Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their assigned Indian state. Additionally, you can work from a different country or Indian state for 90 days of the year. Plus, you'll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options TIDEAN WAYS OF WORKING: At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. #LI-NN1 TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. 
Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
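The Tide role above stresses catching data-quality issues (freshness, duplicates) before they reach DBT models and reports. Below is a hedged sketch of two such checks; it runs on an in-memory sqlite3 database purely so the example is self-contained, whereas in practice the same queries would run against Snowflake or BigQuery through their connectors, or be expressed as dbt tests. Table and column names are invented.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Stand-in engine so the sketch runs anywhere; swap in a warehouse connection in practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (payment_id TEXT, loaded_at TEXT)")
conn.execute(
    "INSERT INTO payments VALUES ('p-1', ?), ('p-2', ?)",
    (datetime.now(timezone.utc).isoformat(), datetime.now(timezone.utc).isoformat()),
)

def check_freshness(conn, table: str, column: str, max_lag: timedelta) -> None:
    (latest,) = conn.execute(f"SELECT MAX({column}) FROM {table}").fetchone()
    lag = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
    assert lag <= max_lag, f"{table} is stale by {lag}"

def check_no_duplicates(conn, table: str, key: str) -> None:
    (dupes,) = conn.execute(
        f"SELECT COUNT(*) - COUNT(DISTINCT {key}) FROM {table}"
    ).fetchone()
    assert dupes == 0, f"{table} has {dupes} duplicate {key} values"

check_freshness(conn, "payments", "loaded_at", max_lag=timedelta(hours=2))
check_no_duplicates(conn, "payments", "payment_id")
print("data quality checks passed")
```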
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role Grade Level (for internal use): 10

Position Summary Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI.

What You'll Do You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud-native environment to: Build & support data ingestion and processing pipelines in the cloud. This will entail extraction, load and transformation of 'big data' from a wide variety of sources, both batch & streaming, using the latest data frameworks and technologies. Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements, and ensure build-out of Data Dictionaries/Data Catalogue and detailed documentation and knowledge around these data assets, metrics and KPIs. Warehouse this data, build data marts, data aggregations, metrics, KPIs and business logic that leads to actionable insights into our product efficacy, marketing platform, customer behaviour, retention etc. Build real-time monitoring dashboards and alerting systems. Coach and mentor other team members.

Who You Are 6+ years of experience in Big Data and Data Engineering. Strong knowledge of advanced SQL, data warehousing concepts and DataMart design. Strong programming skills in SQL, Python/PySpark etc. Experience in design and development of data pipelines and ETL/ELT processes, on-premises and in the cloud. Experience with one of the cloud providers – GCP, Azure, AWS. Experience with relational SQL and NoSQL databases, including Postgres and MongoDB. Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer etc. Experience with distributed version control environments such as Git and Azure DevOps. Experience building Docker images and fetching/promoting/deploying them to production. Experience integrating Docker with a container orchestration framework (Kubernetes) by creating pods, ConfigMaps and deployments using Terraform. Should be able to convert business queries into technical documentation. Strong problem solving and communication skills. Bachelor's or an advanced degree in Computer Science or a related engineering discipline. Good to have: some exposure to Business Intelligence (BI) tools like Tableau, Dundas, Power BI etc.; Agile software development methodologies; working in multi-functional, multi-location teams.

Grade: 10 Location: Gurugram Hybrid Model: twice a week work from office Shift Time: 12 pm to 9 pm IST

What You'll Love About Us – Do ask us about these! Total Rewards. Monetary, beneficial and developmental rewards! Work Life Balance. You can't do a good job if your job is all you do! Prepare for the Future. Academy – we are all learners; we are all teachers! Employee Assistance Program. Confidential and Professional Counselling and Consulting. Diversity & Inclusion. HeForShe! Internal Mobility. Grow with us!

About AutomotiveMastermind Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales.
Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com. At automotiveMastermind, we thrive on high energy at high speed. We’re an organization in hyper-growth mode and have a fast-paced culture to match. Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of “Drive” and “Help” have been at the core of what we do, and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win. What We Do Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. 
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315747 Posted On: 2025-05-13 Location: Gurgaon, Haryana, India
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Hyderābād
On-site
Join our applied-ML team to help turn data into product features—recommendation engines, predictive scores, and intelligent dashboards that ship to real users. You’ll prototype quickly, validate with metrics, and productionise models alongside senior ML engineers. Day-to-Day Responsibilities Clean, explore, and validate datasets (Pandas, NumPy, SQL) Build and evaluate ML/DL models (scikit-learn, TensorFlow / PyTorch) Develop reproducible pipelines using notebooks → scripts → Airflow / Kubeflow Participate in feature engineering, hyper-parameter tuning, and model-selection experiments Package and expose models as REST/gRPC endpoints; monitor drift & accuracy in prod Share insights with stakeholders through visualisations and concise reports Must-Have Skills 1–2 years building ML models in Python Solid understanding of supervised learning workflows (train/validate/test, cross-validation, metrics) Practical experience with at least one deep-learning framework (TensorFlow or PyTorch) Strong data-wrangling skills (Pandas, SQL) and basic statistics (A/B testing, hypothesis testing) Version-control discipline (Git) and comfort with Jupyter-based experimentation Good-to-Have Familiarity with MLOps tooling (MLflow, Weights & Biases, Sagemaker) Exposure to cloud data platforms (BigQuery, Snowflake, Redshift) Knowledge of NLP or CV libraries (spaCy, Hugging Face Transformers, OpenCV) Experience containerising ML services with Docker and orchestrating with Kubernetes Basic understanding of data-privacy and responsible-AI principles Job Types: Full-time, Permanent Pay: From ₹19,100.00 per month Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Fixed shift Monday to Friday Experience: Junior Machine-Learning Engineer: 1 year (Preferred) Work Location: In person
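For context on the supervised-learning workflow this role references (train/validate/test splits, cross-validation, and metrics), here is a minimal, illustrative scikit-learn sketch; the synthetic dataset, model choice, and hyper-parameters are assumptions for demonstration, not details from the posting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import accuracy_score, f1_score

# Synthetic data stands in for a real product dataset
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Hold out a test set; stratify to preserve class balance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validation on the training split for model selection
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
print(f"CV F1: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Final fit and held-out evaluation
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(f"Test F1: {f1_score(y_test, y_pred):.3f}")
```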
Posted 1 week ago
6.0 years
0 Lacs
Hyderābād
On-site
About Sanofi: We are an innovative global healthcare company, driven by one purpose: we chase the miracles of science to improve people's lives. Our team, across some 100 countries, is dedicated to transforming the practice of medicine by working to turn the impossible into the possible. We provide potentially life-changing treatment options and life-saving vaccine protection to millions of people globally, while putting sustainability and social responsibility at the center of our ambitions. Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions that will accelerate Manufacturing & Supply performance and help bring drugs and vaccines to patients faster, to improve health and save lives. Who You Are: You are a dynamic Data Engineer interested in challenging the status quo to design and develop globally scalable solutions that are needed by Sanofi's advanced analytics, AI and ML initiatives for the betterment of our global patients and customers. You are a valued influencer and leader who has contributed to making key datasets available to data scientists, analysts, and consumers throughout the enterprise to meet vital business needs. You have a keen eye for improvement opportunities while continuing to fully comply with all data quality, security, and governance standards. Our vision for digital, data analytics and AI Join us on our journey in enabling Sanofi's Digital Transformation through becoming an AI-first organization. This means: AI Factory - Versatile Teams Operating in Cross Functional Pods: Utilizing digital and data resources to develop AI products, bringing data management, AI and product development skills to products, programs and projects to create an agile, fulfilling and meaningful work environment. Leading Edge Tech Stack: Experience building products that will be deployed globally on a leading-edge tech stack. World Class Mentorship and Training: Working with renowned leaders and academics in machine learning to further develop your skillsets. There are multiple vacancies across our Digital profiles and NA region. Further assessments will be completed to determine specific function and level of hired candidates. 
Job Highlights: Propose and establish technical designs to meet business and technical requirements Develop and maintain data engineering solutions based on requirements and design specifications using appropriate tools and technologies Create data pipelines / ETL pipelines and optimize performance Test and validate developed solutions to ensure they meet requirements Coach other members of data engineering teams on workflows, technical topics, pipeline management Create design and development documentation based on standards for knowledge transfer, training, and maintenance Work with business and product teams to understand requirements, and translate them into technical needs Adhere to and promote best practices and standards for code management, automated testing, and deployments Leverage existing or create new standard data pipelines within Sanofi to bring value through business use cases Develop automated tests for CI/CD pipelines Gather/organize large & complex data assets, and perform relevant analysis Conduct peer reviews for quality, consistency, and rigor for production-level solutions Actively contribute to the Data Engineering community and define leading practices and frameworks Communicate results and findings in a clear, structured manner to stakeholders Remain up to date on the company's standards, industry practices and emerging technologies Key Functional Requirements & Qualifications: Experience working with cross-functional teams to solve complex data architecture and engineering problems Demonstrated ability to learn new data and software engineering technologies in a short amount of time Good understanding of agile/scrum development processes and concepts Able to work in a fast-paced, constantly evolving environment and manage multiple priorities Strong technical analysis and problem-solving skills related to data and technology solutions Excellent written, verbal, and interpersonal skills with the ability to communicate ideas, concepts and solutions to peers and leaders Pragmatic and capable of solving complex issues, with technical intuition and attention to detail Service-oriented, flexible, and approachable team player Fluent in English (other languages a plus) Key Technical Requirements & Qualifications: Bachelor's Degree or equivalent in Computer Science, Engineering, or a relevant field 6+ years of experience in data engineering, integration, data warehousing, business intelligence, business analytics, or a comparable role with relevant technologies and tools, such as Spark/Scala, Informatica/IICS/dbt Understanding of data structures and algorithms Working knowledge of scripting languages (Python, Shell scripting) Experience in cloud-based data platforms (Snowflake is a plus) Experience with job scheduling and orchestration (Airflow is a plus) Good knowledge of SQL and relational database technologies/concepts Experience working with data models and query tuning Nice to haves: Experience working in the life sciences/pharmaceutical industry is a plus Familiarity with data ingestion through batch, near real-time, and streaming environments Familiarity with data warehouse concepts and architectures (data mesh a plus) Familiarity with Source Code Management Tools (GitHub a plus) Pursue Progress Discover Extraordinary Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. 
So, let's be those people. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com! Sanofi is an equal opportunity employer committed to diversity and inclusion. Our goal is to attract, develop and retain highly talented employees from diverse backgrounds, allowing us to benefit from a wide variety of experiences and perspectives. We welcome and encourage applications from all qualified applicants. Accommodations for persons with disabilities required during the recruitment process are available upon request. Thank you in advance for your interest. Only those candidates selected for interviews will be contacted.
Posted 1 week ago
3.0 years
0 Lacs
Delhi
Remote
ABOUT TIDE At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. ABOUT THE ROLE: As part of the team, you will be responsible for building and running the data pipelines and services that are required to support business functions/reports/dashboards. We are heavily dependent on BigQuery/Snowflake, Airflow, Stitch/Fivetran, dbt and Tableau/Looker for our business intelligence and embrace AWS with some GCP. As a Data Engineer you'll be: Developing end-to-end ETL/ELT pipelines, working with Data Analysts of the business function Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis in a Data Mesh architecture Mentoring other Junior Engineers in the team Being a "go-to" expert for data technologies and solutions Providing on-the-ground troubleshooting and diagnosis of architecture and design challenges Troubleshooting and resolving technical issues as they arise Looking for ways of improving both what and how data pipelines are delivered by the department Translating business requirements into technical requirements, such as entities that need to be modelled, dbt models that need to be built, timings, tests and reports Owning the delivery of data models and reports end to end Performing exploratory data analysis in order to identify data quality issues early in the process and implementing tests to prevent them in the future Working with Data Analysts to ensure that all data feeds are optimised and available at the required times. 
This can include Change Capture, Change Data Control and other "delta loading" approaches Discovering, transforming, testing, deploying and documenting data sources Applying, helping define, and championing data warehouse governance: data quality, testing, coding best practices, and peer review Building Looker Dashboards for use cases if required WHAT WE ARE LOOKING FOR: You have 3+ years of extensive development experience using Snowflake or a similar data warehouse technology You have working experience with dbt and other technologies of the modern data stack, such as Snowflake, Apache Airflow, Fivetran, AWS, Git, Looker You have experience in agile processes, such as SCRUM You have extensive experience in writing advanced SQL statements and performance tuning them You have experience in data ingestion techniques using custom or SaaS tools like Fivetran You have experience in data modelling and can optimize existing/new data models You have experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets Experience architecting analytical databases (in a Data Mesh architecture) is an added advantage You have experience working in agile cross-functional delivery teams You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment You have strong technical documentation skills and the ability to be clear and precise with business users You have a business level of English and good communication skills You have a basic understanding of various systems across the AWS platform (good to have) Preferably, you have worked in a digitally native company, ideally fintech Experience with Python, a governance tool (e.g. Atlan, Alation, Collibra) or a data quality tool (e.g. Great Expectations, Monte Carlo, Soda) will be an added advantage Our Tech Stack: dbt Snowflake Airflow Fivetran SQL Looker WHAT YOU'LL GET IN RETURN: Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy our colleagues can work remotely from home or anywhere in their assigned Indian state. Additionally, you can work from a different country or Indian state for 90 days of the year. Plus, you'll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options TIDEAN WAYS OF WORKING: At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. #LI-NN1 TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. 
Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard. At Tide, we thrive on diversity, embracing various backgrounds and experiences. We welcome all individuals regardless of ethnicity, religion, sexual orientation, gender identity, or disability. Our inclusive culture is key to our success, helping us build products that meet our members' diverse needs. We are One Team, committed to transparency and ensuring everyone's voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a highly skilled Senior Data Engineer to join our team. The ideal candidate will have hands-on experience working with large-scale data platforms and a strong background in Python, PySpark, AWS, and modern data warehousing tools such as Snowflake and DBT. Familiarity with NoSQL databases like MongoDB and real-time streaming platforms like Kafka is essential. Key Responsibilities Design, build, and maintain scalable data pipelines using PySpark and Python. Work with AWS cloud services (S3, Lambda, Glue, EMR, Redshift) for data ingestion, processing, and storage. Implement and maintain ETL workflows using DBT and orchestration tools (e.g., Airflow). Design and manage data models in Snowflake, ensuring performance and reliability. Work with SQL for querying and optimizing datasets across different databases. Integrate and manage data from MongoDB, Kafka, and other streaming or NoSQL sources. Collaborate with data scientists, analysts, and other engineers to support advanced analytics and ML initiatives. Ensure data quality, lineage, and governance through best practices and tools. Required Skills & Qualifications Strong programming skills in Python and PySpark. Hands-on experience with AWS data services. Proficiency in SQL and experience with DBT for data transformation. Experience with Snowflake for data warehousing. Knowledge of MongoDB, Kafka, and data streaming concepts. Good understanding of data architecture, modeling, and data governance. Experience with CI/CD and DevOps practices in a data engineering environment is a plus. Excellent problem-solving skills and the ability to work independently or as part of a team.
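As a rough illustration of the PySpark pipeline work described in this posting, the sketch below reads raw order files, aggregates daily revenue, and writes a Parquet mart; the S3 paths and column names are hypothetical placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue_mart").getOrCreate()

# Hypothetical raw input: CSV order exports landed in S3
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

daily_revenue = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Write the aggregate as a Parquet data mart for downstream reporting
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")

spark.stop()
```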
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Build and fine-tune Generative AI models (LLMs, diffusion models, etc.) for various applications. Work with agent and multi-agent frameworks to build task-specific or collaborative AI systems. Develop and deploy ML pipelines for training, inference, and evaluation. Collaborate with cross-functional teams (Product, Data Engineering, DevOps) to integrate ML models into products. Conduct data preprocessing, exploratory analysis, and feature engineering. Stay updated with state-of-the-art research in ML/GenAI and apply it to practical problems. Optimize models for performance, scalability, and efficiency. Work with APIs like OpenAI, Azure OpenAI, and others for rapid prototyping and deployment. Contribute to internal tools and frameworks to support ML experimentation and monitoring. Required Skills & Qualifications Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. 3 to 5 years of hands-on experience in Machine Learning and/or NLP projects. Proficiency in Python and popular ML libraries (e.g., PyTorch, TensorFlow, Hugging Face Transformers). Practical experience with agent and/or multi-agent frameworks (e.g., LangGraph, CrewAI, AutoGen, AutoGPT, BabyAGI, etc.) is highly desirable. Experience working with LLMs (GPT, Claude, etc.). Familiarity with prompt engineering, RAG (Retrieval-Augmented Generation), and fine-tuning techniques. Strong understanding of data structures, algorithms, and ML concepts. Experience in deploying models using tools like Docker, FastAPI, Flask, or MLflow. Experience with vector databases (e.g., pgvector, Pinecone, Weaviate). Knowledge of MLOps tools (e.g., MLflow, Kubeflow, Airflow). Publications or contributions to open-source projects in ML/GenAI. Familiarity with ethical AI principles and responsible AI practices.
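To illustrate the kind of model deployment mentioned in the skills list (FastAPI in front of a trained model), here is a minimal, hypothetical serving sketch; the model artifact, feature schema, and endpoint name are placeholder assumptions, not this employer's actual service.

```python
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")

# Hypothetical pre-trained scikit-learn model saved with joblib
model = joblib.load("model.joblib")

class PredictRequest(BaseModel):
    features: List[float]

@app.post("/predict")
def predict(request: PredictRequest):
    # Wrap the single feature vector in a batch of one
    prediction = model.predict([request.features])
    return {"prediction": prediction.tolist()}
```

Assuming the file is saved as main.py, it could be run locally with, for example, `uvicorn main:app --reload`.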
Posted 1 week ago
4.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
Responsibilities As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and develop work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development & solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design. Required Skills: ● Design, develop, and support data pipelines and related data products and platforms. ● Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms. ● Perform application impact assessments, requirements reviews, and develop work estimates. ● Develop test strategies and site reliability engineering measures for data products and solutions. ● Participate in agile development "scrums" and solution reviews. ● Mentor junior Data Engineers. ● Lead the resolution of critical operations issues, including post-implementation reviews. ● Perform technical data stewardship tasks, including metadata management, security, and privacy by design. ● Design and build data extraction, loading, and transformation pipelines using Python and other GCP Data Technologies ● Demonstrate SQL and database proficiency in various data engineering tasks. ● Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect. ● Develop Unix scripts to support various data operations. ● Model data to support business intelligence and analytics initiatives. ● Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation. ● Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion and Dataproc (good to have). Qualifications: ● Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or related field. ● 4+ years of data engineering experience. ● 2 years of data solution architecture and design experience. ● GCP Certified Data Engineer (preferred). Interested candidates can send their resumes to riyanshi@etelligens.in
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Chennai
On-site
Date: 4 Jun 2025 Company: Qualitest Group Country/Region: IN Key Responsibilities Design, develop, and deploy ML models and AI solutions across various domains such as NLP, computer vision, recommendation systems, time-series forecasting, etc. Perform data preprocessing, feature engineering, and model training using frameworks like TensorFlow, PyTorch, Scikit-learn, or similar. Collaborate with cross-functional teams to understand business problems and translate them into AI/ML solutions. Optimize models for performance, scalability, and reliability in production environments. Integrate ML pipelines with production systems using tools like MLflow, Airflow, Docker, or Kubernetes. Conduct rigorous model evaluation using metrics and validation techniques. Stay up-to-date with state-of-the-art AI/ML research and apply findings to enhance existing systems. Mentor junior engineers and contribute to best practices in ML engineering. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 4–8 years of hands-on experience in machine learning, deep learning, or applied AI. Proficiency in Python and ML libraries/frameworks (e.g., Scikit-learn, TensorFlow, PyTorch, XGBoost). Experience with data wrangling tools (Pandas, NumPy) and SQL/NoSQL databases. Familiarity with cloud platforms (AWS, GCP, or Azure) and ML tools (SageMaker, Vertex AI, etc.). Solid understanding of model deployment, monitoring, and CI/CD pipelines. Strong problem-solving skills and the ability to communicate technical concepts clearly.
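As a hedged illustration of integrating an ML pipeline with experiment tracking (MLflow is one of the tools named above), the sketch below logs parameters, metrics, and a model; the experiment name, data, and model are assumptions for demonstration, not this employer's actual setup.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data as a stand-in for a real training set
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name

with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the run so it can be compared and reproduced later
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```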
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
Job Information Company Yubi Date Opened 06/04/2025 Job Type Full time Industry Technology City Chennai State/Province Tamil Nadu Country India Zip/Postal Code 600001 About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, and from one product to a holistic product suite of seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. Job Description Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfilment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All 5 of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Loans – Term loans and working capital solutions for enterprises. Yubi Invest – Bond issuance and investments for institutional and retail participants. Yubi Pool – End-to-end securitisations and portfolio buyouts. Yubi Flow – A supply chain platform that offers trade financing solutions. Yubi Co.Lend – For banks and NBFCs for co-lending partnerships. Currently, we have onboarded 4,000+ corporates and 350+ investors and have facilitated debt volumes of over INR 40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are a one-of-its-kind debt platform globally, revolutionising the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 650+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. Job description Design, build, and maintain scalable and reliable data pipelines for the ingestion, transformation, and delivery of large datasets. Collaborate with analytics and business teams to understand data requirements and deliver actionable datasets. Develop and optimize ETL processes using modern data engineering tools and frameworks (e.g., Apache Airflow, Spark, SQL). Ensure data quality, integrity, and security across all stages of the data lifecycle. Implement and monitor data solutions on cloud platforms (AWS, GCP, or Azure). Troubleshoot and resolve data pipeline and infrastructure issues with a focus on continuous improvement. Build and maintain data models, warehouses, and marts to support advanced analytics and reporting. Document data architecture, workflows, and processes for internal teams. 
Work closely with Data Scientists and Analysts to enable advanced analytics and machine learning initiatives. Stay updated with industry trends and best practices in data engineering and analytics. Requirements Experience & Expertise :
Posted 1 week ago
0 years
4 - 8 Lacs
Calcutta
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Job Description: Experience in the IT industry Working experience building productionized data ingestion and processing pipelines in Snowflake Strong understanding of Snowflake architecture Well-versed in data warehousing concepts. Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools. Able to create data pipelines for ETL/ELT Excellent presentation and communication skills, both written and verbal Ability to problem-solve and architect in an environment with unclear requirements. Able to create high-level and low-level design documents based on requirements. Hands-on experience in configuration, troubleshooting, testing and managing data platforms, on-premises or in the cloud. Awareness of data visualisation tools and methodologies Work independently on business problems and generate meaningful insights Good to have some experience/knowledge of Snowpark, Streamlit or GenAI, but not mandatory. Should have experience implementing Snowflake best practices Snowflake SnowPro Core Certification will be an added advantage Roles and Responsibilities: Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc. Writing SQL queries against Snowflake, developing scripts to extract, load, and transform data. Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, Streamlit Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems. Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF) Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage. 
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts. Knowledge of ETL (Extract, Transform, Load) processes and tools, and the ability to design and develop efficient ETL jobs using Python or PySpark. Should have some experience with Snowflake RBAC and data security. Should have good experience in implementing CDC or SCD Type-2. Should have good experience in implementing Snowflake best practices In-depth understanding of data warehouse and ETL concepts and data modelling Experience in requirement gathering, analysis, designing, development, and deployment. Should have experience building data ingestion pipelines Optimize and tune data pipelines for performance and scalability Able to communicate with clients and lead a team. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc. Qualifications we seek in you! Minimum qualifications B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and relevant experience as a Snowflake Data Engineer. Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and data warehousing concepts Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Kolkata Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jun 3, 2025, 8:06:52 PM Unposting Date Dec 1, 2025, 12:06:52 AM Master Skills List Digital Job Category Full Time
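For a concrete but purely illustrative sense of the Snowflake-plus-Python work described in this posting, here is a minimal sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are hypothetical placeholders and would normally come from a secrets manager rather than environment variables.

```python
import os

import snowflake.connector

# Hypothetical connection details; never hard-code real credentials
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # Load staged CSV files into a raw table (stage and table are placeholders)
    cur.execute(
        "COPY INTO ORDERS FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Build a simple downstream aggregate table
    cur.execute(
        """
        CREATE OR REPLACE TABLE DAILY_REVENUE AS
        SELECT order_date, SUM(amount) AS total_revenue
        FROM ORDERS
        GROUP BY order_date
        """
    )
finally:
    cur.close()
    conn.close()
```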
Posted 1 week ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Tide At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. About The Team As part of the team, you will be responsible for building and running the data pipelines and services that are required to support business functions/reports/dashboards. We are heavily dependent on BigQuery/Snowflake, Airflow, Stitch/Fivetran, dbt and Tableau/Looker for our business intelligence and embrace AWS with some GCP. About The Role As a Staff Data Engineer you’ll be: Developing end-to-end ETL/ELT pipelines, working with Data Analysts of the business function Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis in a Data Mesh architecture Mentoring other Junior Engineers in the team Being a “go-to” expert for data technologies and solutions Providing on-the-ground troubleshooting and diagnosis of architecture and design challenges Troubleshooting and resolving technical issues as they arise Looking for ways of improving both what and how data pipelines are delivered by the department Translating business requirements into technical requirements, such as entities that need to be modelled, dbt models that need to be built, timings, tests and reports Owning the delivery of data models and reports end to end Performing exploratory data analysis in order to identify data quality issues early in the process and implementing tests to prevent them in the future Working with Data Analysts to ensure that all data feeds are optimised and available at the required times. 
This can include Change Capture, Change Data Control and other “delta loading” approaches Discovering, transforming, testing, deploying and documenting data sources Applying, helping define, and championing data warehouse governance: data quality, testing, coding best practices, and peer review Building Looker Dashboards for use cases if required What We Are Looking For You have 9+ years of extensive development experience using Snowflake or a similar data warehouse technology You have working experience with dbt and other technologies of the modern data stack, such as Snowflake, Apache Airflow, Fivetran, AWS, Git, Looker You have experience in agile processes, such as SCRUM You have extensive experience in writing advanced SQL statements and performance tuning them You have experience in data ingestion techniques using custom or SaaS tools like Fivetran You have experience in data modelling and can optimise existing/new data models You have experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets Experience architecting analytical databases (in a Data Mesh architecture) is an added advantage You have experience working in agile cross-functional delivery teams You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment You have strong technical documentation skills and the ability to be clear and precise with business users You have a business level of English and good communication skills You have a basic understanding of various systems across the AWS platform (good to have) Preferably, you have worked in a digitally native company, ideally fintech Experience with Python, a governance tool (e.g. Atlan, Alation, Collibra) or a data quality tool (e.g. Great Expectations, Monte Carlo, Soda) will be an added advantage Our Tech Stack dbt Snowflake Airflow Fivetran SQL Looker What You Will Get In Return Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options Tidean Ways Of Working At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members’ diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone’s voice is heard. At Tide, we thrive on diversity, embracing various backgrounds and experiences. 
We welcome all individuals regardless of ethnicity, religion, sexual orientation, gender identity, or disability. Our inclusive culture is key to our success, helping us build products that meet our members' diverse needs. We are One Team, committed to transparency and ensuring everyone's voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
Posted 1 week ago
5.0 years
4 - 7 Lacs
Indore
On-site
Indore, India Job Description: Full-time, On-site Role 5+ years of experience as an AI/ML Engineer At least 2 years in a leadership or team-building role Experience in building powerful, real-world applications across various domains Strong communication skills and the ability to explain complex AI concepts to business stakeholders and clients. Experience in leading small to mid-sized teams. A product-first mindset and the ability to drive initiatives from idea to production. Passion for staying at the cutting edge of AI — especially in the era of LLMs and generative AI. Work in a collaborative, high-ownership, fast-moving tech environment You'll be setting up the AI engine for a company already trusted for its tech excellence. If you're seeking technical autonomy, client-facing impact, and team-building ownership, this is your place. Responsibilities: Lead the design, development, and deployment of AI/ML and Deep Learning models. Hire, train, and mentor a growing team of AI/ML engineers and data scientists. Own the architecture and tech stack decisions for the AI division. Engage in client discovery calls and pre-sales conversations to convert technical insights into business opportunities. Lead PoC development, client interviews, and project planning in collaboration with internal and external stakeholders. Stay updated with the latest in AI/ML (especially LLMs, vector search, generative AI, multi-agent chatbots) and translate advancements into real-world implementations. Qualifications: Bachelor's or Master's degree in CS/IT/AI-ML/Data Science Python, NumPy, Pandas, Scikit-learn, PySpark Deep Learning: TensorFlow, PyTorch, Keras CNNs, RNNs, Attention Mechanisms, Transformers SVD (Singular Value Decomposition), K-Means, recency weighting Text embeddings, semantic search, vector DBs (FAISS, Pinecone, Weaviate) Model Context Protocol (MCP) Model Lifecycle: Data preprocessing, training, tuning, evaluation (ROC, F1, AUC, confusion matrix) Botpress / Copilot Studio workflow and integration experience Deployment: RESTful ML APIs, Docker, AWS/GCP (SageMaker, Vertex AI), model versioning Workflow Tools: MLflow, Azure ML, Airflow, Git-based CI/CD Creating REST APIs using the FastAPI framework or equivalent Experience: 5+ yrs
The Airflow job market in India is rapidly growing as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in Airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.
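To make the orchestration work concrete, here is a minimal, illustrative Airflow 2.x DAG with a daily extract-transform-load sequence; the task logic, DAG name, and schedule are placeholders rather than a recommended production setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a source system")

def transform():
    print("clean and aggregate the extracted data")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="daily_etl_example",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence each day
    extract_task >> transform_task >> load_task
```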
The average salary range for Airflow professionals in India varies based on experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!