
5747 Airflow Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Your IT Future, Delivered. Senior Software Engineer (AI/ML Engineer)

With a global team of 5,600+ IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the world's biggest logistics company. All our offices have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered. At DHL IT Services, we design, build and run IT solutions for the whole of DPDHL globally.

Grow together. The AI & Analytics team builds and runs solutions to get much more value out of our data. We help our business colleagues all over the world with machine learning algorithms, predictive models and visualizations. We manage more than 46 AI & Big Data applications, 3,000 active users, 87 countries and up to 100,000,000 daily transactions. Integrating AI & Big Data into business processes to compete in a data-driven world requires state-of-the-art technology. Our infrastructure, hosted on-prem and in the cloud (Azure and GCP), includes MapR, Airflow, Spark, Kafka, Jupyter, Kubeflow, Jenkins, GitHub, Tableau, Power BI, Synapse (Analytics), Databricks and further interesting tools. We like to do everything in an Agile/DevOps way. No more throwing the "problem code" to support, no silos. Our teams are completely product-oriented, with end-to-end responsibility for the success of our product.

Ready to embark on the journey? Here's what we are looking for: we are currently looking for an AI / Machine Learning Engineer. In this role, you will have the opportunity to design and develop solutions, contribute to roadmaps of Big Data architectures, and provide mentorship and feedback to more junior team members. We are looking for someone to help us manage the petabytes of data we have and turn them into value. Does that sound a bit like you? Let's talk! Even if you don't tick all the boxes below, we'd love to hear from you; our new department is rapidly growing and we're looking for many people with a can-do mindset to join us on our digitalization journey. Thank you for considering DHL as the next step in your career – we do believe we can make a difference together!

What will you need? University degree in Computer Science, Information Systems, Business Administration, or a related field. 2+ years of experience in a Data Scientist / Machine Learning Engineer role. Strong analytical skills related to working with structured, semi-structured and unstructured datasets. Advanced machine learning techniques: decision trees, random forests, boosting algorithms, neural networks, deep learning, support vector machines, clustering, Bayesian networks, reinforcement learning, feature reduction/engineering, anomaly detection, natural language processing (incl. sentiment analysis, topic modeling), natural language generation. Statistics / mathematics: data quality analysis, data identification, hypothesis testing, univariate/multivariate analysis, cluster analysis, classification/PCA, factor analysis, linear modeling, time series, distribution/probability theory, and/or strong experience in specialized analytics tools and technologies (including, but not limited to). Lead the integration of large language models into AI applications. Very good Python programming skills.
Power BI, Tableau. Develop the application and deploy the model in production. Kubeflow, MLflow, Airflow, Jenkins, CI/CD pipelines.

As an AI/ML Engineer, you will be responsible for developing applications and systems that leverage AI tools, Cloud AI services, and Generative AI models. Your role includes designing cloud-based or on-premises application pipelines that meet production-ready standards, utilizing deep learning, neural networks, chatbots, and image processing technologies.

Professional & Technical Skills – Essential Skills: Expertise in large language models. Strong knowledge of statistical analysis and machine learning algorithms. Experience with data visualization tools such as Tableau or Power BI. Practical experience with various machine learning algorithms, including linear regression, logistic regression, decision trees, and clustering techniques. Proficient in data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Awareness of Apache Spark and Hadoop. Awareness of Agile / Scrum ways of working. Identify the right modeling approach(es) for a given scenario and articulate why the approach fits. Assess data availability and modeling feasibility. Review interpretation of model results. Experience in the logistics industry domain would be an added advantage.

Roles & Responsibilities: Act as a Subject Matter Expert (SME). Collaborate with and manage team performance. Make decisions that impact the team. Work with various teams and contribute to significant decision-making processes. Provide solutions to challenges that affect multiple teams. Lead the integration of large language models into AI applications. Research and implement advanced AI techniques to improve system performance. Assist in the development and deployment of AI solutions across different domains.

You should have: Certifications in some of the core technologies. Ability to collaborate across different teams, geographies, stakeholders and levels of seniority. Customer focus with an eye on continuous improvement. An energetic, enthusiastic and results-oriented personality. Ability to coach other team members; you must be a team player! A strong will to overcome the complexities involved in developing and supporting data pipelines.

Language requirements: English – fluent spoken and written (C1 level).

An array of benefits for you: Hybrid work arrangements to balance in-office collaboration and home flexibility. Annual leave: 42 days off apart from public / national holidays. Medical insurance: self + spouse + 2 children, with an option to opt for voluntary parental insurance (parents / parents-in-law) at a nominal premium covering pre-existing diseases. In-house training programs: professional and technical training certifications.
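For illustration only (not part of the posting): a minimal sketch of the kind of workflow this role describes, training a simple anomaly-detection model and logging it with MLflow, two of the tools named above. The dataset, feature names, and experiment name are hypothetical.

```python
# Hypothetical sketch: anomaly detection on shipment-like data, tracked with MLflow.
# Dataset, feature names, and experiment name are placeholders for illustration.
import numpy as np
import pandas as pd
import mlflow
import mlflow.sklearn
from sklearn.ensemble import IsolationForest

# Toy stand-in for real telemetry
shipments = pd.DataFrame({
    "transit_hours": np.random.gamma(shape=2.0, scale=24.0, size=1000),
    "weight_kg": np.random.gamma(shape=2.0, scale=5.0, size=1000),
})

mlflow.set_experiment("shipment-anomaly-detection")  # hypothetical experiment name
with mlflow.start_run():
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(shipments)
    flags = model.predict(shipments) == -1          # -1 marks outliers
    mlflow.log_param("contamination", 0.01)
    mlflow.log_metric("flagged_ratio", float(flags.mean()))
    mlflow.sklearn.log_model(model, "model")        # registered artifact for later deployment
```

In a setup like the one described, a scheduler such as Airflow would retrain and re-log this model periodically, and the MLflow artifact would feed the deployment step.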

Posted 2 days ago

Apply

12.0 - 16.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description: Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 12 - 16 years of experience and the following requirements and skills. Advanced SQL Development: write complex SQL queries for data extraction, transformation, and analysis; optimize SQL queries for performance and scalability. SQL Tuning and Joins: analyze and improve query performance; deep understanding of joins, indexing, and query execution plans. GCP BigQuery and GCS: work with Google BigQuery for data warehousing and analytics; manage and integrate data using Google Cloud Storage (GCS). Airflow DAG Development: design, develop, and maintain workflows using Apache Airflow; write custom DAGs to automate data pipelines and processes. Python Programming: develop and maintain Python scripts for data processing and automation; debug and optimize Python code for performance and reliability. Shell Scripting: write and debug basic shell scripts for automation and system tasks. Continuous Learning: stay updated with the latest tools and technologies in data engineering; demonstrate a strong ability and attitude to learn and adapt quickly. Communication: collaborate effectively with cross-functional teams; clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements: To be successful in this role, you should meet the following requirements: advanced SQL writing and query optimization; strong understanding of SQL tuning, joins, and indexing; hands-on experience with GCP services, especially BigQuery and GCS; proficiency in Python programming and debugging; experience with Apache Airflow and DAG development; basic knowledge of shell scripting; excellent problem-solving skills and a growth mindset; strong verbal and written communication skills; experience with data pipeline orchestration and ETL processes; familiarity with other GCP services like Dataflow or Pub/Sub; knowledge of CI/CD pipelines and version control (e.g., Git).

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
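Not part of the HSBC posting: a minimal, hypothetical sketch of the "Airflow DAG development" and "BigQuery" items above, a daily DAG whose task runs a parameterized query through the BigQuery client. The project, dataset, and table names are placeholders.

```python
# Hypothetical Airflow DAG: run a daily BigQuery aggregation and log the row count.
# Project/dataset/table names are placeholders, not real systems.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def aggregate_daily_balances(ds: str, **_) -> None:
    """Aggregate one logical day's transactions; `ds` is Airflow's execution date."""
    client = bigquery.Client()  # relies on Application Default Credentials
    run_date = datetime.strptime(ds, "%Y-%m-%d").date()
    sql = """
        SELECT account_id, SUM(amount) AS daily_total
        FROM `my-project.my_dataset.transactions`   -- placeholder table
        WHERE DATE(txn_ts) = @run_date
        GROUP BY account_id
    """
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ScalarQueryParameter("run_date", "DATE", run_date)]
        ),
    )
    print(f"{job.result().total_rows} accounts aggregated for {ds}")


with DAG(
    dag_id="daily_balance_aggregation",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="aggregate", python_callable=aggregate_daily_balances)
```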

Posted 2 days ago

Apply

7.0 years

0 Lacs

India

On-site

About Us: MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences. Are You The One? As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. 
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. MatchMove Culture: We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporation has service arrangements, in each case for all purposes in connection with your job application, and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
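Purely illustrative and not from the posting: a sketch of writing a deduplicated batch into an Apache Iceberg table, one of the open table formats mentioned above. It assumes a Spark session already configured with an Iceberg catalog (called "glue_catalog" here); the S3 path and table identifiers are invented.

```python
# Hypothetical PySpark sketch: land raw payment events into an Iceberg table.
# Assumes the session was started with an Iceberg catalog configured as "glue_catalog";
# the S3 path and table identifier below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payments-ingest").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/payments/2025/08/01/")  # placeholder path

deduped = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["transaction_id"])          # keeps re-runs idempotent
)

(
    deduped.writeTo("glue_catalog.payments.transactions")  # placeholder identifier
           .using("iceberg")
           .partitionedBy(F.col("event_date"))
           .createOrReplace()
)
```

Partitioning on the event date keeps downstream reconciliation and fraud queries pruned to the days they actually need, which is one of the cost levers such a role would own.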

Posted 2 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody's Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com

Position Title: Associate Director (Senior Architect – Data). Department: IT. Location: Gurgaon / Bangalore.

Job Summary: The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at conceptual, logical, business area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining enterprise data architecture, ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects.

Key Responsibilities

Strategy & Planning: Develop and deliver long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity. Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement. Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks. Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Ensure that data strategies and architectures are aligned with regulatory compliance. Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects' goals. Ensure effective data management throughout the project lifecycle.

Acquisition & Deployment: Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.). Liaise with vendors and service providers to select the products or services that best meet company goals.

Operational Management: Assess and determine governance, stewardship, and frameworks for managing data across the organization. Develop and promote data management methodologies and standards.
Document information products from business processes and create data entities. Create entity relationship diagrams to show the digital thread across the value streams and enterprise. Create data normalization across all systems and databases to ensure there is a common definition of data entities across the enterprise. Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data. Address the regulatory compliance requirements of each country and ensure our data is secure and compliant. Select and implement the appropriate tools, software, applications, and systems to support data technology goals. Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Collaborate with project managers and business unit leaders for all projects involving enterprise data. Address data-related problems regarding systems integration, compatibility, and multiple-platform integration. Act as a leader and advocate of data management, including coaching, training, and career development to staff. Develop and implement key components as needed to create testing criteria to guarantee the fidelity and performance of data architecture. Document the data architecture and environment to maintain a current and accurate view of the larger data picture. Identify and develop opportunities for data reuse, migration, or retirement.

Data Architecture Design: Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes. Design and implement scalable, high-performance data solutions that meet business requirements. Data Governance: Establish and enforce data governance policies and procedures as agreed with stakeholders. Maintain data integrity, quality, and security within Finance, HR and other such enterprise systems. Data Migration: Oversee the data migration process from legacy systems to the new systems being put in place. Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness. Master Data Management: Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes. Provide data management (create, update and delimit) methods to ensure master data is governed. Stakeholder Collaboration: Collaborate with various stakeholders, including business users, other system vendors, and stakeholders to understand data requirements. Ensure the enterprise system meets the organization's data needs. Training and Support: Provide training and support to end-users on data entry, retrieval, and reporting within the candidate enterprise systems. Promote user adoption and proper use of data. Data Quality Assurance: Implement data quality assurance measures to identify and correct data issues. Ensure the Oracle Fusion and other enterprise systems contain reliable and up-to-date information. Reporting and Analytics: Facilitate the development of reporting and analytics capabilities within the Oracle Fusion and other systems. Enable data-driven decision-making through robust data analysis. Continuous Improvement: Continuously monitor and improve data processes and the Oracle Fusion and other systems' data capabilities. Leverage new technologies for enhanced data management to support evolving business needs.
Technology and Tools: Oracle Fusion Cloud Data modeling tools (e.g., ER/Studio, ERwin) ETL tools (e.g., Informatica, Talend, Azure Data Factory) Data Pipelines: Understanding of data pipeline tools like Apache Airflow and AWS Glue. Database management systems: Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached) Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM) Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP) Hyperscalers / Cloud platforms (e.g., AWS, Azure) Big Data Technologies such as Hadoop, HDFS, MapReduce, and Spark Cloud Platforms such as Amazon Web Services, including RDS, Redshift, and S3, Microsoft Azure services like Azure SQL Database and Cosmos DB and experience in Google Cloud Platform services such as BigQuery and Cloud Storage. Programming Languages: (e.g. using Java, J2EE, EJB, .NET, WebSphere, etc.) SQL: Strong SQL skills for querying and managing databases. Python: Proficiency in Python for data manipulation and analysis. Java: Knowledge of Java for building data-driven applications. Data Security and Protocols: Understanding of data security protocols and compliance standards. Key Competencies Qualifications: Education: Bachelor’s degree in computer science, Information Technology, or a related field. Master’s degree preferred. Experience: 10+ years overall and at least 7 years of experience in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable. Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members. Excellent in documentation and presentations. Good knowledge of applicable data privacy practices and laws. Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus. Behavioral A self-starter, an excellent planner and executor and above all, a good team player Excellent communication skills and inter-personal skills are a must Must possess organizational skills, including multi-task capability, priority setting and meeting deadlines Ability to build collaborative relationships and effectively leverage networks to mobilize resources Initiative to learn business domain is highly desirable Likes dynamic and constantly evolving environment and requirements
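An illustrative aside, not from the job description: the "common definition of data entities" and normalization points above come down to designs like the tiny example below, shown with SQLite so it runs anywhere; the entity and column names are hypothetical.

```python
# Hypothetical sketch: a small normalized design so "customer" is defined once
# and referenced everywhere, instead of being re-keyed per system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (
    customer_id   INTEGER PRIMARY KEY,
    legal_name    TEXT NOT NULL,
    country_code  TEXT NOT NULL            -- ISO 3166-1 alpha-2
);

CREATE TABLE sales_order (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER NOT NULL REFERENCES customer(customer_id),
    order_date    TEXT NOT NULL,
    total_amount  NUMERIC NOT NULL
);
""")

conn.execute("INSERT INTO customer VALUES (1, 'Acme Pte Ltd', 'SG')")
conn.execute("INSERT INTO sales_order VALUES (100, 1, '2025-07-31', 4200.00)")

# Reporting joins back to the single customer definition (the "single source of truth").
for row in conn.execute("""
    SELECT c.legal_name, SUM(o.total_amount)
    FROM sales_order o JOIN customer c USING (customer_id)
    GROUP BY c.legal_name
"""):
    print(row)
```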

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences– all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone. Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently. We are seeking a Senior Data Engineer to enhance our data posture and architecture, synchronizing data across vital third-party systems like Workday, Greenhouse, GSuite, and JIRA, as well as our internal Roblox OS application database. Our Roblox OS app suite encompasses internal tools and third-party applications for People Operations, Talent Acquisition, Budgeting, Roadmapping, and Business Analytics. We envision an integrated platform that streamlines processes while providing employees and leaders with the information they need to support the business. This is a new team in our Roblox India location, working closely with data scientists & analysts, product & engineering, and other stakeholders in India & US. You will report to the Engineering Manager of the Roblox OS Team in your local location and collaborate with Roblox internal teams globally. Work Model : This role is based in Gurugram and follows a hybrid structure — 3 days from the office (Tuesday, Wednesday & Thursday) and 2 days work from home. Shift Time : 2:00pm - 10:30pm IST (Cabs will be provided) You Will Design and Build Scalable Data Pipelines: Architect, develop, and maintain robust, scalable data pipelines using orchestration frameworks like Airflow to synchronize data between internal systems. Implement and Optimize ETL Processes: Apply strong understanding of ETL (Extract, Transform, Load) processes and best practices for seamless data integration and transformation. Develop Data Solutions with SQL: Utilize your proficiency in SQL and relational databases (e.g., PostgreSQL) for advanced querying, data modeling, and optimizing data solutions. Contribute to Data Architecture: Actively participate in data architecture and implementation discussions, ensuring data integrity and efficient data transposition. Manage and optimize data infrastructure, including database, cloud storage solutions, and API endpoints. Write High-Quality Code: Focus on developing clear, readable, testable, modular, and well-monitored code for data manipulation, automation, and software development with a strong emphasis on data integrity. Troubleshoot and Optimize Performance: Apply excellent analytical and problem-solving skills to diagnose data issues and optimize pipeline performance. Collaborate Cross-Functionally: Work effectively with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate business needs into technical data solutions. 
Ensure Data Governance and Security: Implement data anonymization and pseudonymization techniques to protect sensitive data, and contribute to master data management (MDM) concepts including data quality, lineage, and governance frameworks.

You Have: Data Engineering Expertise: At least 6+ years of proven experience designing, building, and maintaining scalable data pipelines, coupled with a strong understanding of ETL processes and best practices for data integration. Database and Data Warehousing Proficiency: Deep proficiency in SQL and relational databases (e.g., PostgreSQL), and familiarity with at least one cloud-based data warehouse solution (e.g., Snowflake, Redshift, BigQuery). Technical Acumen: Strong scripting skills for data manipulation and automation. Familiarity with data streaming platforms (e.g., Kafka, Kinesis), and knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP) for deploying and managing data solutions. Data & Cloud Infrastructure Management: Experience with managing and optimizing data infrastructure, including databases, cloud storage solutions, and configuring API endpoints. Software Development Experience: Experience in software development with a focus on data integrity and transposition, and a commitment to writing clear, readable, testable, modular, and well-monitored code. Problem-Solving & Collaboration Skills: Excellent analytical and problem-solving abilities to troubleshoot complex data issues, combined with strong communication and collaboration skills to work effectively across teams. Passion for Data: A genuine passion for working with data from various sources, understanding the critical impact of data quality on company strategy at an executive level. Adaptability: Ability to thrive and deliver results in a fast-paced environment with competing priorities.

Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted). Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
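As a concrete illustration of the anonymization/pseudonymization point above (not part of the posting): a minimal sketch that replaces a direct identifier with a salted hash before HR data leaves the pipeline. The column names are invented, and in practice the salt would come from a secrets manager rather than being hard-coded.

```python
# Hypothetical sketch: pseudonymize employee identifiers before loading to analytics.
# Field names are invented; a real pipeline would pull SALT from a secrets manager.
import hashlib
import pandas as pd

SALT = "load-me-from-a-secrets-manager"   # placeholder; never hard-code in production


def pseudonymize(value: str) -> str:
    """Deterministic salted hash so joins still work without exposing the raw value."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]


employees = pd.DataFrame({
    "employee_email": ["a@example.com", "b@example.com"],
    "cost_center": ["ENG-42", "HR-07"],
    "base_salary": [100, 120],
})

employees["employee_key"] = employees["employee_email"].map(pseudonymize)
analytics_view = employees.drop(columns=["employee_email"])   # raw identifier never leaves
print(analytics_view)
```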

Posted 2 days ago

Apply

8.0 - 12.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description: Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 8 - 12 years of experience and the following requirements and skills. Advanced SQL Development: write complex SQL queries for data extraction, transformation, and analysis; optimize SQL queries for performance and scalability. SQL Tuning and Joins: analyze and improve query performance; deep understanding of joins, indexing, and query execution plans. GCP BigQuery and GCS: work with Google BigQuery for data warehousing and analytics; manage and integrate data using Google Cloud Storage (GCS). Airflow DAG Development: design, develop, and maintain workflows using Apache Airflow; write custom DAGs to automate data pipelines and processes. Python Programming: develop and maintain Python scripts for data processing and automation; debug and optimize Python code for performance and reliability. Shell Scripting: write and debug basic shell scripts for automation and system tasks. Continuous Learning: stay updated with the latest tools and technologies in data engineering; demonstrate a strong ability and attitude to learn and adapt quickly. Communication: collaborate effectively with cross-functional teams; clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements: To be successful in this role, you should meet the following requirements: advanced SQL writing and query optimization; strong understanding of SQL tuning, joins, and indexing; hands-on experience with GCP services, especially BigQuery and GCS; proficiency in Python programming and debugging; experience with Apache Airflow and DAG development; basic knowledge of shell scripting; excellent problem-solving skills and a growth mindset; strong verbal and written communication skills; experience with data pipeline orchestration and ETL processes; familiarity with other GCP services like Dataflow or Pub/Sub; knowledge of CI/CD pipelines and version control (e.g., Git).

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
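Since this listing repeats the SQL tuning and BigQuery themes, here is a different hedged sketch (not from the posting): using a BigQuery dry run to check how many bytes a query would scan before executing it, a common first step when optimizing cost and performance. The table name and the 10 GiB budget are placeholders.

```python
# Hypothetical sketch: estimate scan cost of a query with a BigQuery dry run
# before executing it for real. Table name and the byte budget are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT account_id, COUNT(*) AS txn_count
    FROM `my-project.my_dataset.transactions`     -- placeholder table
    WHERE DATE(txn_ts) BETWEEN '2025-01-01' AND '2025-01-31'
    GROUP BY account_id
"""

dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False))
gib = dry.total_bytes_processed / 1024 ** 3
print(f"Query would scan {gib:.2f} GiB")

# Only run it if the scan stays under an (arbitrary) 10 GiB budget.
if gib < 10:
    rows = client.query(sql).result()
    print(f"{rows.total_rows} accounts returned")
```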

Posted 2 days ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager – EXL/M/1436956 – Services, Gurgaon. Posted On: 31 Jul 2025. End Date: 14 Sep 2025. Required Experience: 5 - 10 Years.

Basic Section – Number of Positions: 1. Band: C1. Band Name: Manager. Cost Code: D014049. Campus/Non Campus: Non Campus. Employment Type: Permanent. Requisition Type: New. Max CTC: 1,800,000 - 3,200,000. Complexity Level: Not Applicable. Work Type: Hybrid – working partly from home and partly from office. Organisational Group: Analytics. Sub Group: Retail Media & Hi-Tech. Organization: Services. LOB: Retail Media & Hi-Tech. SBU: Analytics. Country: India. City: Gurgaon. Center: EXL - Gurgaon Center 38. Skills: Data Analytics, Project Management, Stakeholder Management, SQL, AWS. Minimum Qualification: Any Graduate. Certification: No data available.

Job Description – Key responsibilities:

Project Leadership & Execution: Own the end-to-end execution of data analytics and architecture projects, ensuring alignment with business objectives. Oversee 3-4 Data Analysts working on data assessment, gap analysis, and data architecture, ensuring timely delivery and quality outcomes. Develop and maintain project roadmaps, timelines, and deliverables for analytics initiatives. Identify and mitigate risks, dependencies, and bottlenecks to ensure smooth project execution.

Data Analysis & Process Optimization: Work closely with the Data Analyst to evaluate data completeness, consistency, and accuracy across multiple datasets. Lead gap analysis efforts to identify discrepancies and opportunities for data integration and quality improvement. Work along with the data analyst on data mapping exercises and address ad hoc requests in a timely manner. Oversee and work on enhancements to the current process, focusing on automation.

Stakeholder Management & Collaboration: Serve as the primary liaison between technical teams, business units, and leadership, ensuring clear communication and alignment. Translate business requirements into technical roadmaps for data-driven initiatives. Facilitate cross-functional collaboration between data engineers, analysts, business stakeholders, and IT teams. Present insights, progress, and impact of data projects to senior leadership and stakeholders.

Skills required: Project management experience. Strong communication and stakeholder management skills. Tools: SQL, Excel, Airflow, AWS, Monte Carlo, Python (good to have). Problem-solving mindset: ability to identify data gaps, optimize workflows, and drive process improvements. Data & analytics knowledge: understanding of data assessment, gap analysis, data architecture, and data governance.

Workflow Type: L&S-DA-Consulting
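Illustration only, not part of the EXL description: the data assessment and gap analysis work described above often starts with a quick completeness and duplication profile like the hedged pandas sketch below; the file name and key columns are hypothetical.

```python
# Hypothetical sketch: quick completeness / gap profile of an extract before deeper analysis.
# File name and column names are placeholders.
import pandas as pd

orders = pd.read_csv("orders_extract.csv", parse_dates=["order_date"])

# Per-column completeness and cardinality
profile = pd.DataFrame({
    "non_null_pct": (orders.notna().mean() * 100).round(1),
    "distinct_values": orders.nunique(),
})
print(profile.sort_values("non_null_pct"))

# Duplicate business keys and missing calendar days
duplicate_keys = orders.duplicated(subset=["order_id"]).sum()
expected_days = pd.date_range(orders["order_date"].min(), orders["order_date"].max())
date_gaps = expected_days.difference(orders["order_date"].dt.normalize().unique())

print(f"{duplicate_keys} duplicate order_id rows")
print(f"{len(date_gaps)} calendar days with no orders")
```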

Posted 2 days ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

About Kazam: We are an agnostic EV charging software platform building India's largest smart and affordable EV charging network. Through our partnerships with fleets, CPOs, RWAs, and OEMs, we have been able to create a robust charging network with over 7,000 devices on our platform. Kazam enables fleet companies, charge point operators and OEMs by providing an affordable and complete software stack, including white-label template apps (both Android & iOS), API integration, a load management solution and a charger monitoring dashboard, so that you can do hassle-free business without worrying about technology. (Please note that you can use both Kazam chargers and OCPP-enabled charging points via our platform.) Not only that, we are able to drive utilisation to your charging station by leveraging Kazam's network of 50,000+ EV drivers. Through our partnerships with fleets, CPOs, RWAs and OEMs we have been able to create a robust charging network with over 11,000+ devices on our platform.

Key Responsibilities: Work with analytics teams to ensure data is clean, structured, and accessible for analysis and reporting. Implement data quality and governance frameworks to ensure data integrity across the organization. Contribute to data exploration and analysis projects by delivering robust, reusable data pipelines that support deep data analysis. Design, implement, and optimize scalable data architectures, including data lakes, data warehouses, and real-time streaming solutions. Develop and maintain ETL/ELT pipelines to ensure efficient data flow from multiple sources. Leverage automation to streamline data ingestion, processing, and integration tasks. Develop and maintain scripts for data automation and orchestration, ensuring timely and accurate delivery of data products. Work closely with DevOps and Cloud teams to ensure data infrastructure is secure, reliable, and scalable.

Qualifications & Skills – Technical Skills: ETL/ELT: proficient in building and maintaining ETL/ELT processes using tools such as Apache Airflow, dbt, Talend, or custom scripts in Python, SQL, NoSQL, etc. Analytics: strong understanding of data analytics concepts, with experience in creating data models and working closely with BI/analytics teams. Automation: hands-on experience with data automation tools (e.g., Apache Airflow, Prefect) and scripting (Python, shell, etc.) to automate data workflows. Data Architecture: experience in designing and maintaining data lakes, warehouses, and real-time streaming architectures using technologies like AWS/GCP/Azure, Hadoop, Spark, Kafka, etc.

Soft Skills: Excellent problem-solving skills and the ability to work independently and as part of a team. Ability to collaborate cross-functionally with analytics, business intelligence, and product teams. Strong communication skills with the ability to translate complex technical concepts for non-technical stakeholders. Attention to detail and commitment to data quality and governance.
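A hedged sketch, not part of the Kazam posting: consuming charger telemetry from a Kafka topic and filtering out malformed readings, in the spirit of the real-time streaming responsibilities above. The broker address, topic name, and message fields are assumptions.

```python
# Hypothetical sketch: consume EV-charger telemetry from Kafka and keep only valid readings.
# Broker, topic, and field names are placeholders.
import json
from kafka import KafkaConsumer   # kafka-python

consumer = KafkaConsumer(
    "charger-telemetry",                      # placeholder topic
    bootstrap_servers="localhost:9092",       # placeholder broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

REQUIRED = {"charger_id", "timestamp", "power_kw"}

for message in consumer:
    event = message.value
    if not REQUIRED.issubset(event):
        continue          # in a real pipeline, route to a dead-letter store instead
    if event["power_kw"] < 0:
        continue          # obviously bad reading
    # Downstream: append to the lake / warehouse, update utilisation dashboards, etc.
    print(event["charger_id"], event["power_kw"])
```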

Posted 2 days ago

Apply

0 years

0 Lacs

Haryana

On-site

Overview: The Analytics Engineer I plays a key role in supporting the organization's BI systems and data platforms. This individual will focus on learning and assisting with tasks related to data quality, operational efficiency, and real-time analytics. They will work under the guidance of experienced BI Engineers to implement and maintain data ingestion pipelines, monitoring systems, and reporting solutions. This role offers a great opportunity to gain hands-on experience in BI tools like Snowflake Data Cloud, Sigma Computing and develop a strong foundation in data. Prodege: A cutting-edge marketing and consumer insights platform, Prodege has charted a course of innovation in the evolving technology landscape by helping leading brands, marketers, and agencies uncover the answers to their business questions, acquire new customers, increase revenue, and drive brand loyalty & product adoption. Bolstered by a major investment by Great Hill Partners in Q4 2021 and strategic acquisitions of Pollfish, BitBurst & AdGate Media in 2022, Prodege looks forward to more growth and innovation to empower our partners to gather meaningful, rich insights and better market to their target audiences. As an organization, we go the extra mile to “Create Rewarding Moments” every day for our partners, consumers, and team. Come join us today! Primary Objectives: Elicit and translate business needs: Collaborate with stakeholders to understand their data and reporting requirements, translating them into actionable technical specifications. Transform data into insights: Assist with designing and developing clear dashboards and visualizations, effectively communicating key business metrics and trends. Ensure data integrity: Maintain and optimize data pipelines ensuring accurate and timely delivery of data to external systems. Cultivate data expertise and stewardship: Develop a strong understanding of our data strategy, actively participating in data governance initiatives and acting as a data steward to ensure data quality, accuracy, and responsible use. Qualifications - To perform this job successfully, an individual must be able to perform each job duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Detailed Job Duties: ( typical monthly, weekly, daily tasks which support the primary objectives ) Visualize insights: Create and maintain dashboards and reports that effectively communicate key business metrics and trends. Maintain and enhance data feeds to external partners, ensuring accurate and timely synchronization of business information Data Ingestion: Ingest, transform, and integrate data from various partners Ad-hoc reporting: respond to data requests and perform exploratory analysis to support decision-making and identify opportunities. Champion data quality and integrity, ensuring compliance with data governance policies and best practices What Success Looks like: Success in the Analytics Engineer I role entails becoming a trusted data partner, empowering the organization with insightful visualizations and ensuring data reliability. You will bridge the gap between business needs and data insights, fostering a data-driven culture through effective collaboration and clear communication. 
By maintaining critical data pipelines and championing data quality, you'll ensure the integrity of our data ecosystem while continuously developing your skills to become a valuable data expert, ultimately contributing to the achievement of our strategic objectives. The MUST Haves: ( ex: skills, education, experience, certifications, licenses ) At least One (1) year of experience working with data in a technical or analytical role. Basic knowledge of SQL and at least 1 scripting language. Understanding of databases, both OLTP and OLAP Strong analytical and problem-solving skills, capable of managing complex BI projects and delivering solutions that meet business needs. Excellent communication and collaboration abilities, with a propensity for cross-functional teamwork and knowledge sharing. Continuous learner with a passion for staying current with industry best practices and emerging BI technologies. The NICE to Haves : Bachelor’s degree in Computer Science, Information Systems, Data Science, or related field; master’s degree or industry-specific certifications preferred. Experience with Snowflake, Sigma Computing, dbt, Airflow or python
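For illustration, not from the posting: a minimal sketch of the data-quality checks an Analytics Engineer might automate against Snowflake using the Python connector; the account, credentials, and table names are placeholders.

```python
# Hypothetical sketch: simple freshness and null-rate checks against a Snowflake table.
# Connection details and table name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="analytics_bot", password="***",
    warehouse="REPORTING_WH", database="ANALYTICS", schema="MARTS",
)

checks = {
    "rows_loaded_today": "SELECT COUNT(*) FROM FACT_ORDERS WHERE LOADED_AT::DATE = CURRENT_DATE",
    "null_customer_ids": "SELECT COUNT(*) FROM FACT_ORDERS WHERE CUSTOMER_ID IS NULL",
}

cur = conn.cursor()
try:
    for name, sql in checks.items():
        cur.execute(sql)
        (value,) = cur.fetchone()
        print(f"{name}: {value}")   # a scheduler or alerting tool would act on these numbers
finally:
    cur.close()
    conn.close()
```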

Posted 2 days ago

Apply

0 years

1 - 7 Lacs

Gurgaon

On-site

Overview: The Analytics Engineer I plays a key role in supporting the organization's BI systems and data platforms. This individual will focus on learning and assisting with tasks related to data quality, operational efficiency, and real-time analytics. They will work under the guidance of experienced BI Engineers to implement and maintain data ingestion pipelines, monitoring systems, and reporting solutions. This role offers a great opportunity to gain hands-on experience in BI tools like Snowflake Data Cloud, Sigma Computing and develop a strong foundation in data. Prodege: A cutting-edge marketing and consumer insights platform, Prodege has charted a course of innovation in the evolving technology landscape by helping leading brands, marketers, and agencies uncover the answers to their business questions, acquire new customers, increase revenue, and drive brand loyalty & product adoption. Bolstered by a major investment by Great Hill Partners in Q4 2021 and strategic acquisitions of Pollfish, BitBurst & AdGate Media in 2022, Prodege looks forward to more growth and innovation to empower our partners to gather meaningful, rich insights and better market to their target audiences. As an organization, we go the extra mile to “Create Rewarding Moments” every day for our partners, consumers, and team. Come join us today! Primary Objectives: Elicit and translate business needs: Collaborate with stakeholders to understand their data and reporting requirements, translating them into actionable technical specifications. Transform data into insights: Assist with designing and developing clear dashboards and visualizations, effectively communicating key business metrics and trends. Ensure data integrity: Maintain and optimize data pipelines ensuring accurate and timely delivery of data to external systems. Cultivate data expertise and stewardship: Develop a strong understanding of our data strategy, actively participating in data governance initiatives and acting as a data steward to ensure data quality, accuracy, and responsible use. Qualifications - To perform this job successfully, an individual must be able to perform each job duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Detailed Job Duties: ( typical monthly, weekly, daily tasks which support the primary objectives ) Visualize insights: Create and maintain dashboards and reports that effectively communicate key business metrics and trends. Maintain and enhance data feeds to external partners, ensuring accurate and timely synchronization of business information Data Ingestion: Ingest, transform, and integrate data from various partners Ad-hoc reporting: respond to data requests and perform exploratory analysis to support decision-making and identify opportunities. Champion data quality and integrity, ensuring compliance with data governance policies and best practices What Success Looks like: Success in the Analytics Engineer I role entails becoming a trusted data partner, empowering the organization with insightful visualizations and ensuring data reliability. You will bridge the gap between business needs and data insights, fostering a data-driven culture through effective collaboration and clear communication. 
By maintaining critical data pipelines and championing data quality, you'll ensure the integrity of our data ecosystem while continuously developing your skills to become a valuable data expert, ultimately contributing to the achievement of our strategic objectives. Qualifications The MUST Haves: ( ex: skills, education, experience, certifications, licenses ) At least One (1) year of experience working with data in a technical or analytical role. Basic knowledge of SQL and at least 1 scripting language. Understanding of databases, both OLTP and OLAP Strong analytical and problem-solving skills, capable of managing complex BI projects and delivering solutions that meet business needs. Excellent communication and collaboration abilities, with a propensity for cross-functional teamwork and knowledge sharing. Continuous learner with a passion for staying current with industry best practices and emerging BI technologies. The NICE to Haves : Bachelor’s degree in Computer Science, Information Systems, Data Science, or related field; master’s degree or industry-specific certifications preferred. Experience with Snowflake, Sigma Computing, dbt, Airflow or python

Posted 2 days ago

Apply

15.0 years

0 Lacs

Bhubaneshwar

On-site

Project Role: Application Developer. Project Role Description: Design, build and configure applications to meet business process and application requirements. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: Python (programming language), Apache Airflow. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills: - Must-have skills: proficiency in Databricks Unified Data Analytics Platform. - Good-to-have skills: experience with Apache Airflow, Python (programming language). - Strong understanding of data integration and ETL processes. - Experience with cloud-based data solutions and architectures. - Familiarity with data governance and management best practices.

Additional Information: - The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform. - This position is based at our Kolkata office. - 15 years of full-time education is required.
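A hedged illustration, not part of the posting: on Databricks, the day-to-day ETL work this role describes often looks like the short PySpark/Delta sketch below; the paths and table names are hypothetical.

```python
# Hypothetical sketch of a Databricks-style ETL step: read raw files, apply a light
# transformation, and write a Delta table. Paths and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # provided automatically in a Databricks notebook

raw = spark.read.option("header", True).csv("/mnt/raw/sales/2025-08-01/")  # placeholder path

clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

(
    clean.write.format("delta")
         .mode("append")
         .partitionBy("ingest_date")
         .saveAsTable("analytics.sales_daily")   # placeholder table
)
```

A job like this would typically be scheduled by Databricks Workflows or an Airflow DAG, which is why the posting lists Airflow as a good-to-have.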

Posted 2 days ago

Apply

6.0 years

0 Lacs

India

Remote

Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is Looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault tolerant solutions to their ever growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics Your contributions will have a major impact on our global customer-base and across the industry through our market-leading products You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow). 
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS Or Equivalent Required, MSCS Or Equivalent Strongly Preferred How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
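Not part of the listing: a minimal, hypothetical sketch of the retrieval half of a RAG system, embedding a question and a handful of documents and ranking them by cosine similarity. A production system would use a vector database and an LLM call on the retrieved context; the model choice and documents here are placeholders.

```python
# Hypothetical sketch: the retrieval step of a RAG pipeline using local embeddings.
# Model name and documents are placeholders; a real system would query a vector DB
# and pass the top chunks to an LLM as context.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Alert 4123: anomalous login volume from a single ASN over 10 minutes.",
    "Runbook: rotating credentials after suspected token exposure.",
    "Change log: data pipeline upgraded to Spark 3.5 on 2025-06-01.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

question = "What should we do after a suspected token leak?"
query_vector = model.encode([question], normalize_embeddings=True)[0]

scores = doc_vectors @ query_vector            # cosine similarity (vectors are normalized)
top_k = np.argsort(scores)[::-1][:2]
context = "\n".join(documents[i] for i in top_k)
print(context)   # this context would be sent to the LLM along with the question
```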

Posted 2 days ago

Apply

3.0 - 5.0 years

1 - 4 Lacs

India

On-site

Job Overview: We are looking for a highly organized, proactive, and dedicated Executive Assistant (EA) to support our Founders. The ideal candidate will be a reliable, detail-oriented professional who can manage multiple priorities and perform a variety of administrative tasks to ensure the smooth running of daily operations.

Key Responsibilities:
Assist the executive team, focusing on efficiency and innovation in airflow technology.
Follow up diligently on tasks assigned to the team and external agencies on behalf of the Founders.
Handle the various tasks at hand proactively and resourcefully.
Manage bilingual communications (Hindi and English) with internal teams and global partners.
Organize meetings, including technology demos and sustainability briefings.
Handle sensitive information related to product development and intellectual property.
Support project management in areas like product launches and eco-friendly initiatives.
Arrange travel and logistics for industry conferences and green technology events.
Facilitate collaboration across departments to promote Karban's mission and values.

Requirements:
3-5 years of experience as an assistant or secretary to the MD of a company.
Graduation from a good college.
Excellent follow-up skills.
Working knowledge of MS Office, especially Excel and Word.
Proficiency in Hindi and English, with excellent communication skills.
Keen interest in technology, sustainability, and innovation.
Strong organizational skills, with a flair for managing creative projects.
Previous experience in a tech or environmentally focused company is a plus.

Preferred: Married female candidates. Honesty and integrity in their work. Should be open to working on personal tasks assigned by the Manager.

Working Conditions: Full-time position with standard working hours, i.e. 9 AM - 6 PM, Monday to Saturday.

What we offer: Competitive salary & benefits. Professional growth opportunities. Inclusive & collaborative culture. Sick, casual and privilege leaves.

Job Type: Full-time
Pay: ₹180,000.00 - ₹420,000.00 per year
Benefits: Paid sick time, Paid time off
Schedule: Day shift
Experience: Assistant: 3 years (Required)
Work Location: In person

Posted 2 days ago

Apply

5.0 years

3 - 10 Lacs

Jaipur

On-site

Role: QA Automation Engineer
Employment: Full Time
Experience: 5 to 10 Years
Salary: Not Disclosed
Location: Jaipur, India

Programmers.IO is currently looking to hire a QA Automation Engineer with skills in automation tools like Selenium and TestNG, scripting skills in Python and Java, and experience with telecommunications or RAN data systems. If you think you are a good fit and are willing to work from our Jaipur, India location, please apply with your resume or share it at anjali.shah@programmers.io.

Experience Required: 5 to 10 Years

Job Overview
The QA Automation Engineer will develop and execute automated testing frameworks for infrastructure. This role ensures the quality and reliability of data pipelines, cloud configurations, and RAN data ingestion processes through automated testing.

Responsibilities
Design and implement automated testing frameworks for data pipelines and infrastructure.
Develop test scripts for Databricks, Snowflake, and Airflow-based systems.
Execute and monitor automated tests to ensure system reliability.
Collaborate with developers to identify and resolve defects.
Maintain and update test cases based on project requirements.
Document test results and quality metrics.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience in QA automation or software testing.
Proficiency in automation tools (e.g., Selenium, TestNG, or similar).
Experience with testing data pipelines or cloud-based systems.
Strong scripting skills in Python or Java.
Must be located in India and eligible to work.

Preferred Skills
Experience in telecommunications or RAN data systems.
Familiarity with Databricks, Snowflake, or Airflow.
Knowledge of CI/CD pipelines and Git.
Certifications in QA or automation testing (e.g., ISTQB).

Skills and Knowledge: Automation tools like Selenium and TestNG, scripting skills in Python and Java, telecommunications or RAN data systems.
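As one hedged illustration of the "test scripts for Airflow-based systems" this role mentions, a common smoke test loads every DAG and fails on import or structural errors. The dags/ folder path and the daily_ingest DAG id below are placeholders:

```python
# Pytest smoke tests for Airflow DAG integrity using Airflow's DagBag loader.
import pytest
from airflow.models import DagBag

@pytest.fixture(scope="session")
def dagbag():
    # Load only the project's own DAG files, not Airflow's bundled examples.
    return DagBag(dag_folder="dags/", include_examples=False)

def test_dags_import_cleanly(dagbag):
    # Any syntax error or missing dependency in a DAG file shows up here.
    assert dagbag.import_errors == {}, f"DAG import errors: {dagbag.import_errors}"

def test_every_dag_has_tasks(dagbag):
    for dag_id, dag in dagbag.dags.items():
        assert len(dag.tasks) > 0, f"{dag_id} defines no tasks"

def test_daily_ingest_dependencies(dagbag):
    # Example structural assertion for a specific (hypothetical) pipeline.
    dag = dagbag.get_dag("daily_ingest")
    if dag is None:
        pytest.skip("daily_ingest DAG not present in this repo")
    assert "load_to_warehouse" in dag.task_ids
```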

Posted 2 days ago

Apply


0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Key Responsibilities
Design and develop scalable ETL pipelines using Cloud Functions, Cloud Dataproc (Spark), and BigQuery as the central data warehouse for large-scale batch and transformation workloads.
Implement efficient data modeling techniques in BigQuery (including star/snowflake schemas, partitioning, and clustering) to support high-performance analytics and reduce query costs.
Build end-to-end ingestion frameworks leveraging Cloud Pub/Sub and Cloud Functions for real-time and event-driven data capture.
Use Apache Airflow (Cloud Composer) for orchestration of complex data workflows and dependency management.
Apply Cloud Data Fusion and Datastream selectively for integrating specific sources (e.g., databases and legacy systems) into the pipeline.
Develop strong backtracking and troubleshooting workflows to quickly identify data issues, job failures, and pipeline bottlenecks, ensuring consistent data delivery and SLA compliance.
Integrate robust monitoring, alerting, and logging to ensure data quality, integrity, and observability.

Tech stack
GCP: BigQuery, Cloud Functions, Cloud Dataproc (Spark), Pub/Sub, Data Fusion, Datastream
Orchestration: Apache Airflow (Cloud Composer)
Languages: Python, SQL, PySpark
Concepts: Data Modeling, ETL/ELT, Streaming & Batch Processing, Schema Management, Monitoring & Logging

Some of the most important data sources (ingestion techniques for these are required knowledge):
CRM systems (cloud-based and internal)
Salesforce
Teradata
MySQL
API
Other 3rd-party and internal operational systems

Skills: ETL/ELT, Cloud Data Fusion, schema management, SQL, PySpark, Cloud Dataproc (Spark), monitoring & logging, data modeling, BigQuery, Cloud Pub/Sub, Python, GCP, streaming & batch processing, Datastream, Cloud Functions, Spark, Apache Airflow (Cloud Composer)
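A short sketch of the BigQuery partitioning and clustering this posting refers to, using the google-cloud-bigquery client; the project, dataset, table, and schema names are hypothetical:

```python
# Create a day-partitioned, clustered BigQuery table so analytic queries can
# prune partitions and scan fewer bytes.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_type", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
]

table = bigquery.Table("my-analytics-project.events.web_events", schema=schema)
# Partition by day on the event timestamp so queries can prune whole partitions...
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
# ...and cluster within each partition on the most common filter columns.
table.clustering_fields = ["customer_id", "event_type"]

client.create_table(table, exists_ok=True)
```

Queries that filter on event_ts and customer_id then read only the relevant partitions and blocks, which is the main lever for the cost reduction described above.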

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Hi,

Role: Azure Data Engineer
Location: Chennai, Gurugram (onsite 3 days a week)
Shift Timing: 2 PM to 11 PM
Experience: 3+ years
Notice Period: Immediate or 15 days (please do not apply if your notice period is more than 30 days)

Required Skills and Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Certifications in Databricks, Azure, or related technologies are a plus.
Technical Skills:
Proficiency in SQL for complex queries, database design, and optimization.
Strong experience with PySpark for data transformation and processing.
Hands-on experience with Databricks for building and managing big data solutions.
Familiarity with cloud platforms like Azure.
Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift).
Experience with data versioning and orchestration tools like Git, Airflow, or Dagster.
Solid understanding of Big Data ecosystems (Hadoop, Hive, etc.).
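A small, hedged example of the PySpark transformation work this role describes (read, clean, aggregate, write); the paths and column names are illustrative, and on Databricks the SparkSession is already provided as `spark`:

```python
# Daily rollup of raw event files: deduplicate, drop incomplete rows, aggregate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

raw = (
    spark.read.option("header", True).csv("/mnt/raw/events/")
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropna(subset=["customer_id", "event_ts"])
    .dropDuplicates(["event_id"])
)

daily = (
    raw.groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("customer_id").alias("customers"))
)

# Partitioned Parquet output keeps downstream reads cheap.
daily.write.mode("overwrite").partitionBy("event_date").parquet("/mnt/curated/daily_events/")
```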

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance the effectiveness of QA strategies.

What You Will Do
Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes and environments.
Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model.
Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need
Bachelor's degree in a STEM major or equivalent experience
5-7 years of software testing experience
Able to create and review test automation according to specifications
Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation
Created test strategies and plans
Led complex testing efforts or projects
Participated in Sprint Planning as the Test Lead
Collaborated with Product Owners, SREs, Technical Architects to define testing strategies and plans
Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs
Cloud Certification Strongly Preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
Attention to Detail - Define test case candidates for automation that are outside of product specifications, i.e. negative testing; create thorough and accurate documentation of all work including status updates to summarize project highlights; validate that processes operate properly and conform to standards
Automation - Automate defined test cases and test suites per project
Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads and architects on functional and non-functional test strategies and plans
Execution - Develop scalable and reliable automated tests; develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points
Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; analyze results of functional and non-functional tests and make recommendations for improvements
Performance / Resilience - Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure the products meet SLAs/SLOs
Quality Focus - Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation including status and project updates
Risk Mitigation - Work with Product Owners, QE and development team leads to track and determine prioritization of defect fixes
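One hedged example of validating Dataflow/Apache Beam processing of the kind this role mentions, using Beam's built-in testing utilities; the count_by_key transform is a hypothetical stand-in for a pipeline step under test:

```python
# Unit test for a Beam transform, runnable locally without Dataflow.
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

def count_by_key(events):
    # Count events per type; this stands in for the transform being validated.
    return (
        events
        | "KeyByType" >> beam.Map(lambda e: (e["type"], 1))
        | "SumPerType" >> beam.CombinePerKey(sum)
    )

def test_count_by_key():
    events = [{"type": "login"}, {"type": "login"}, {"type": "purchase"}]
    with TestPipeline() as p:
        result = count_by_key(p | beam.Create(events))
        assert_that(result, equal_to([("login", 2), ("purchase", 1)]))
```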

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Role: Data Engineer
Experience: 7+ Years
Mode: Hybrid

Key Responsibilities:
• Design and implement enterprise-grade Data Lake solutions using AWS (e.g., S3, Glue, Lake Formation).
• Define data architecture patterns, best practices, and frameworks for handling large-scale data ingestion, storage, computing and processing.
• Optimize cloud infrastructure for performance, scalability, and cost-effectiveness.
• Develop and maintain ETL pipelines using tools such as AWS Glue or similar platforms; manage CI/CD pipelines in DevOps.
• Create and manage robust Data Warehousing solutions using technologies such as Redshift.
• Ensure high data quality and integrity across all pipelines.
• Design and deploy dashboards and visualizations using tools like Tableau, Power BI, or Qlik.
• Collaborate with business stakeholders to define key metrics and deliver actionable insights.
• Implement best practices for data encryption, secure data transfer, and role-based access control.
• Lead audits and compliance certifications to maintain organizational standards.
• Work closely with cross-functional teams, including Data Scientists, Analysts, and DevOps engineers.
• Mentor junior team members and provide technical guidance for complex projects.
• Partner with stakeholders to define and align data strategies that meet business objectives.

Qualifications & Skills:
• Strong experience in building Data Lakes using the AWS cloud platform tech stack.
• Proficiency with AWS technologies such as S3, EC2, Glue/Lake Formation (or EMR), QuickSight, Redshift, Athena, Airflow (or Lambda + Step Functions + EventBridge), Data, and IAM.
• Expertise in AWS tools covering Data Lake storage, compute, security and data governance.
• Advanced skills in ETL processes, SQL (e.g., Cloud SQL, Aurora, Postgres), NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra) and programming languages (e.g., Python, Spark, or Scala); real-time streaming applications preferably in Spark, Kafka, or other streaming platforms.
• AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.
• Hands-on experience with Data Warehousing solutions and modern architectures like Lakehouses or Delta Lake; proficiency in visualization tools such as Tableau, Power BI, or Qlik.
• Strong problem-solving skills and ability to debug and optimize applications for performance.
• Strong understanding of databases/SQL for database operations and data management.
• Familiarity with CI/CD pipelines and version control systems like Git.
• Strong understanding of Agile methodologies and working within scrum teams.

Preferred Qualifications:
• Bachelor of Engineering degree in Computer Science, Information Technology, or a related field.
• AWS Certified Solutions Architect – Associate (required).
• Experience with Agile/Scrum methodologies and design sprints.
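A minimal skeleton of an AWS Glue PySpark job of the kind this posting describes (Data Catalog source, transform, partitioned Parquet to S3); the database, table, and bucket names are placeholders:

```python
# AWS Glue ETL job skeleton: catalog read -> schema mapping -> partitioned S3 write.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (e.g., by a crawler).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_lake", table_name="orders"
)

# Rename/cast columns into the curated schema.
curated = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
        ("order_date", "string", "order_date", "date"),
    ],
)

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/",
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```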

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Role: QA Automation (QA)
Location: India/Remote
Need Immediate Joiner

Job Overview

Responsibilities
Design and implement automated testing frameworks for data pipelines and infrastructure.
Develop test scripts for Databricks, Snowflake, and Airflow-based systems.
Execute and monitor automated tests to ensure system reliability.
Collaborate with developers to identify and resolve defects.
Maintain and update test cases based on project requirements.
Document test results and quality metrics.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience in QA automation or software testing.
Proficiency in automation tools (e.g., Selenium, TestNG, or similar).
Experience with testing data pipelines or cloud-based systems.
Strong scripting skills in Python or Java.
Must be located in India and eligible to work.

Preferred Skills
Experience in telecommunications or RAN data systems.
Familiarity with Databricks, Snowflake, or Airflow.
Knowledge of CI/CD pipelines and Git.
Certifications in QA or automation testing (e.g., ISTQB).

Please share your resume at Akhila.kadudhuri@programmers.io with current CTC, expected CTC, and notice period.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role
We're looking for a Senior Engineering Manager to lead our Data / AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged securely by functions such as Legal, CX, and Product. This is a hands-on leadership role perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do
Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products – enabling product experience through data)
Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution
Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
Own the technical roadmap for our data platform including infra modernization, performance, scalability, and cost efficiency
Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
Build and scale ML infrastructure with MLOps best practices including automated pipelines, model monitoring, and real-time inference systems
Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
Implement enterprise AI governance including model security, access controls, and compliance frameworks for internal AI applications
Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
Establish and enforce best practices around data governance, access controls, and data quality
Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
Coach engineers and team leads through regular 1:1s, feedback, and performance conversations

What You Will Need
10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
Strong stakeholder management skills with experience translating business requirements into data solutions and identifying product enhancement opportunities
Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform
Hands-on experience building AI/ML platforms, including MLOps tools and experience with LLM hosting, model serving, and secure AI application development
Proven experience improving performance, cost, and observability in large-scale data systems
Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code
Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns
Comfort working in fast-paced, product-led environments with the ability to balance innovation and regulatory constraints
Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations

Life at slice
Life so good, you'd think we're kidding:
Competitive salaries. Period.
An extensive medical insurance that looks out for our employees & their dependents. We'll love you and take care of you, our promise.
Flexible working hours. Just don't call us at 3AM, we like our sleep schedule.
Tailored vacation & leave policies so that you enjoy every important moment in your life.
A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
Learning and upskilling opportunities. Seriously, not kidding.
Good food, games, and a cool office to make you feel like home. An environment so good, you'll forget the term "colleagues can't be your friends".

Posted 2 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Google Cloud Platform (GCP) Data Engineer
Location: Hybrid (Bengaluru, India)
Job Type: Full-Time
Experience Required: Minimum 6 Years
Joining: Immediate or within 1 week

About the Company: Tech T7 Innovations is a global IT solutions provider known for delivering cutting-edge technology services to enterprises across various domains. With a team of seasoned professionals, we specialize in software development, cloud computing, data engineering, machine learning, and cybersecurity. Our focus is on leveraging the latest technologies and best practices to create scalable, reliable, and secure solutions for our clients.

Job Summary: We are seeking a highly skilled Senior GCP Data Engineer with over 6 years of experience in data engineering and extensive hands-on expertise in Google Cloud Platform (GCP). The ideal candidate must have a strong foundation in GCS, BigQuery, Apache Airflow/Composer, and Python, with a demonstrated ability to design and implement robust, scalable data pipelines in a cloud environment.

Roles and Responsibilities:
Design, develop, and deploy scalable and secure data pipelines using Google Cloud Platform components including GCS, BigQuery, and Airflow.
Develop and manage robust ETL/ELT workflows using Python and integrate with orchestration tools such as Apache Airflow or Cloud Composer.
Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver reliable and efficient data solutions.
Optimize BigQuery performance using best practices such as partitioning, clustering, schema design, and query tuning.
Manage, monitor, and maintain data lake and data warehouse environments with high availability and integrity.
Automate pipeline monitoring, error handling, and alerting mechanisms to ensure seamless and reliable data delivery.
Contribute to architecture decisions involving data modeling, data flow, and integration strategies in a cloud-native environment.
Ensure compliance with data governance, privacy, and security policies as per enterprise and regulatory standards.
Mentor junior engineers and drive best practices in cloud engineering and data operations.

Mandatory Skills:
Google Cloud Platform (GCP): In-depth hands-on experience with GCS, BigQuery, IAM, and Cloud Functions.
BigQuery (BQ): Expertise in large-scale analytics, schema optimization, and data modeling.
Google Cloud Storage (GCS): Strong understanding of data lifecycle management, access controls, and best practices.
Apache Airflow / Cloud Composer: Proficiency in writing and managing complex DAGs for data orchestration.
Python Programming: Advanced skills in automation, API integration, and data processing using libraries like Pandas, PySpark, etc.

Preferred Qualifications:
Experience with CI/CD pipelines for data infrastructure and workflows.
Exposure to other GCP services like Dataflow, Pub/Sub, and Cloud Functions.
Familiarity with Infrastructure as Code (IaC) tools such as Terraform.
Strong communication and analytical skills for problem-solving and stakeholder engagement.
GCP Certifications (e.g., Professional Data Engineer) will be a significant advantage.
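A hedged sketch of the Composer/Airflow orchestration this role describes: load a daily GCS drop into BigQuery, then build a rollup. It assumes the apache-airflow-providers-google package and Airflow 2.4+; the project, bucket, dataset, and SQL are placeholders:

```python
# Daily GCS -> BigQuery DAG with a downstream transform query.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="gcs_to_bq_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="example-landing-bucket",
        source_objects=["events/{{ ds }}/*.json"],
        destination_project_dataset_table="example-project.raw.events",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_APPEND",
    )

    build_daily_rollup = BigQueryInsertJobOperator(
        task_id="build_daily_rollup",
        configuration={
            "query": {
                "query": """
                    SELECT DATE(event_ts) AS event_date, event_type, COUNT(*) AS events
                    FROM `example-project.raw.events`
                    WHERE DATE(event_ts) = '{{ ds }}'
                    GROUP BY event_date, event_type
                """,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "analytics",
                    "tableId": "daily_events",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )

    load_raw >> build_daily_rollup
```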

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025 . Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. Job Overview: We are seeking a highly skilled and collaborative Data Scientist to join our team. Reporting to the Director of Data Science, you will work alongside and provide technical guidance to a subset of our Analytics and Insights group and provide support to a few business lines including Industry and University Partnerships, and Content/Credentials supporting Product, Marketing, Content, Finance, Services, and more. As a Data Scientist, you will influence strategies and roadmaps for business units within your purview through actionable insights. Your responsibilities will include forecasting content performance, informing content acquisition and prescribing improvements, addressing A/B testing setups and reporting, answering ad-hoc business questions, defining metrics and goals, building and managing dashboards, causal inference and ML modeling, supporting business event tracking and unification, and more. The ideal candidate will be a creative and collaborative Data Scientist who can proactively drive results in their areas of focus, and provide guidance and best practices around statistical modeling and experimentation, data analysis, and data quality. Responsibilities: As a Data Scientist, you will assume responsibility for guiding the planning, assessment, and execution of our content initiatives and the subsequent engagement and learner and student success. You will play a key role in identifying gaps in content and proposing ways to acquire new, targeted content or enhance existing content, in addition to creating and leveraging forecasts of content pre and post launch, defining KPIs and creating reports for measuring the impact of diverse tests, product improvements, and content releases. 
In this position, you will guide other Data Scientists and provide technical feedback throughout various stages of project development, including optimizing analytical models, reviewing code, creating dashboards, and conducting experimentation. You will examine and prioritize projects and provide leadership with insightful feedback on results. Your role will require that you analyze the connection between our consumer-focused business and our learner's pathway to a degree, and optimize systems to lead to the best outcome for our learners. You will collaborate closely with the data engineering and operations teams to ensure that we have the right content at the right time for our learners. You will also analyze prospect behavior to identify content acquisition and optimization opportunities that promote global growth in the Consumer and Degrees business, and assist content, marketing, and product managers in developing and measuring growth initiatives, resulting in more lives changed through learning.

In this role, you'll be directly involved in the planning, measurement, and evaluation of content, engagement, and customer success initiatives and experiments.
Proactively identify gaps in content availability and recommend targeted content acquisition or improvement to existing content.
Define and develop KPIs and create reports to measure the impact of various tests, content releases, product improvements, etc.
Mentor Data Scientists and offer technical guidance on projects: dashboard creation, troubleshooting code, statistical modeling, experimentation, and general analysis optimization.
Run exploratory analyses, uncover new areas of opportunity, create and test hypotheses, develop dashboards, and assess the potential upside of a given opportunity.
Work closely with our data engineering and operations teams to ensure that content funnel tracking supports product needs and maintain self-serve reporting tools.
Advise content, marketing, and product managers on experimentation and measurement plans for key growth initiatives.

Basic Qualifications:
Background in applied math, computer science, statistics, or a related technical field
5+ years of experience using data to advise product or business teams
2+ years of experience applying statistical inference techniques to business questions
Excellent business intuition, cross-functional communication, and project management
Strong applied statistics and data visualization skills
Proficient with at least one scripting language (e.g. Python), one statistical software package (e.g. R, NumPy/SciPy/Pandas), and SQL

Preferred Qualifications:
Experience at an EdTech or content subscription business
Experience partnering with SaaS sales and/or marketing organizations
Experience working with Salesforce and/or Marketo data
Experience with Airflow, Databricks and/or Looker
Experience with Amplitude

If this opportunity interests you, you might like these courses on Coursera: Go Beyond the Numbers: Translate Data into Insights; Applied AI with DeepLearning; Probability & Statistics for Machine Learning & Data Science.

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class.
If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
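A generic illustration of the A/B-testing inference work this role mentions: a two-proportion z-test comparing conversion between control and treatment. The counts below are invented for illustration:

```python
# Two-proportion z-test for an A/B experiment, plus Wilson confidence intervals.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# conversions and sample sizes for control vs. treatment (made-up numbers)
conversions = [420, 485]
samples = [10000, 10000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)
ci_control = proportion_confint(conversions[0], samples[0], method="wilson")
ci_treatment = proportion_confint(conversions[1], samples[1], method="wilson")

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print(f"control conversion CI:   {ci_control}")
print(f"treatment conversion CI: {ci_treatment}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; consider a longer test or larger sample.")
```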

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies