2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Team
Come help us build the world's most reliable on-demand logistics engine for delivery! We're bringing on experienced engineers to help us advance the 24x7 global infrastructure system that powers DoorDash's three-sided marketplace of consumers, merchants, and dashers.

About the Role
The Data Tools mission is to build robust data platforms and establish policies that guarantee the analytics data is of high quality, easily accessible/cataloged, and compliant with financial and privacy regulations, fostering trust and confidence in our data-driven decision-making process. We are building the Data Tools team in India, and you will be part of a founding team with greater scope for impact, helping grow the team and shape the roadmap for the data platform at DoorDash. You will report directly to the Data Tools Engineering Manager.

You're excited about this opportunity because you will…
Work on building a data discovery platform, privacy frameworks, unified access control frameworks, and a data quality platform that enable data builders at DoorDash to deliver high-quality, trustworthy data sets and metrics
Help accelerate adoption of the data discovery platform by building integrations across online and analytics platforms and promoting self-service
Come up with solutions for scaling data systems to meet various business needs
Collaborate in a dynamic startup environment

We're excited about you because…
B.E./B.Tech., M.E./M.Tech., or Ph.D. in Computer Science or equivalent
2+ years of experience applying CS fundamentals, with experience in at least one of Scala, Java, or Python
Prior technical experience in Big Data infrastructure and governance - you've built meaningful pieces of data infrastructure. Bonus if those were open-source technologies like DataHub, Spark, Airflow, Kafka, or Flink
Experience improving the efficiency, scalability, and stability of data platforms

We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT. As part of the evaluation process we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound on June 20, 2024. Please see the independent bias audit report covering our use of Covey here.
About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144 To request a reasonable accommodation under applicable law or alternate selection process, please inform your recruiting contact upon initial connection.
Posted 1 week ago
1.0 - 4.0 years
10 - 14 Lacs
Pune
Work from Office
Overview
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks/BigQuery/Airflow/Composer. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
Ensure data quality and integrity through data validation and cleansing processes. Analyze existing SQL queries, functions, and stored procedures for performance improvements. Develop database routines such as procedures, functions, and views/materialized views. Participate in data migration projects and understand technologies like Delta Lake/warehouse/BigQuery. Debug and solve complex problems in data pipelines and processes.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. Strong understanding of distributed data processing platforms like Databricks and BigQuery. Proficiency in Python, PySpark, and SQL programming languages. Experience with performance optimization for large datasets. Strong debugging and problem-solving skills. Fundamental knowledge of cloud services, preferably Azure or GCP. Excellent communication and teamwork skills.
Nice to have: Experience in data migration projects. Understanding of technologies like Delta Lake/warehouse.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
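For illustration only (not part of the posting): a minimal PySpark sketch of the kind of pipeline and optimization work described above, namely a broadcast join plus date-partitioned output; all paths, table names, and column names are assumed placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative only: paths and column names are placeholders, not from the posting.
spark = (SparkSession.builder
         .appName("orders_daily_load")
         .getOrCreate())

orders = spark.read.parquet("s3://example-bucket/raw/orders/")        # large fact data
merchants = spark.read.parquet("s3://example-bucket/dim/merchants/")  # small dimension

# Broadcast the small dimension table to avoid a shuffle-heavy join.
enriched = (orders
            .join(F.broadcast(merchants), on="merchant_id", how="left")
            .withColumn("order_date", F.to_date("order_ts")))

# Partitioning the output by date keeps downstream scans pruned and cheap.
(enriched
 .repartition("order_date")
 .write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-bucket/curated/orders_enriched/"))

spark.stop()
```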
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams in which fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Lead Data Engineer
Location: All EXL Locations
Experience: 10 to 15 years

Job Summary
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and will work on project teams to analyze, design, develop, and deploy business intelligence / data integration solutions to support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, training, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) and approaches to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations. It effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities:
Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, and data testing plans.
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
May serve as project or DI lead, overseeing multiple consultants from various competencies.
Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration.
Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes.
Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else that is data related at the project or business unit levels.
Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include but are not limited to: SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
Work with the reporting team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Must have:
Experience writing code in a programming language, with working experience in Python, PySpark, Databricks, Scala, or similar.
Data pipeline development and management: design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like AWS Glue, AWS Data Pipeline, Lambda, and Step Functions. Implement incremental data processing using tools like Apache Spark (EMR), Kinesis, and Kafka. Work with AWS data storage solutions such as Amazon S3, Redshift, RDS, DynamoDB, and Aurora. Optimize data partitioning, compression, and indexing for efficient querying and cost optimization. Implement data lake architecture using AWS Lake Formation and Glue Catalog. Implement CI/CD pipelines for data workflows using CodePipeline, CodeBuild, and GitHub Actions.

Good to have:
Enterprise data modelling and semantic modelling, with working experience in ERwin, ER/Studio, PowerDesigner, or similar.
Logical/physical modelling on big data sets or a modern data warehouse, with working experience in ERwin, ER/Studio, PowerDesigner, or similar.
Agile process (Scrum cadences, roles, deliverables) and a basic understanding of Azure DevOps, JIRA, or similar.

Key skills: Python, PySpark, AWS, Databricks, SQL.
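For illustration only (not part of the posting): a minimal sketch of orchestrating one of the incremental AWS Glue loads described above from an Airflow DAG, assuming an Airflow 2.4+ deployment and boto3 credentials configured in the environment; the DAG id, Glue job name, and region are placeholders.

```python
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def start_glue_job(**context):
    """Kick off a Glue job run for the logical date being processed."""
    client = boto3.client("glue", region_name="us-east-1")
    run = client.start_job_run(
        JobName="orders_incremental_etl",              # placeholder job name
        Arguments={"--process_date": context["ds"]},   # incremental window for the run
    )
    return run["JobRunId"]


with DAG(
    dag_id="orders_incremental_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    run_glue = PythonOperator(
        task_id="run_glue_job",
        python_callable=start_glue_job,
    )
```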
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
Job Title: Cloud Data Engineer – GCP + Python Job Type: Full Time Industry: Banking & Finance Location: Pune, Maharashtra, India (Hybrid/On-site) Job Summary: We are hiring a skilled Cloud Data Engineer with expertise in Google Cloud Platform (GCP), Python, and advanced SQL . You'll work on building scalable, cloud-native data pipelines and automating data workflows for enterprise-scale analytics and banking projects. Key Responsibilities: Build and maintain robust ETL pipelines using Python and PySpark Develop data workflows using BigQuery, Cloud Composer, Dataflow , and Cloud Storage Write and optimize complex SQL queries for transformation and reporting Automate workflows with Airflow/Cloud Composer Collaborate with analysts, architects, and business teams Ensure code quality, reliability, and secure data practices Contribute to scalable, high-performance cloud data architecture Requirements: 5–8 years in Data Engineering or Cloud Data roles Strong hands-on experience with GCP services (BigQuery, Cloud Storage, Composer) Proficiency in Python and PySpark Advanced SQL skills and experience with CI/CD tools Working knowledge of workflow orchestration (Airflow preferred) Job Type: Full-time Pay: ₹1,200,000.00 - ₹2,000,000.00 per year Application Question(s): How many years of hands-on experience do you have with Google Cloud Platform (GCP) services such as BigQuery, Cloud Storage, or Dataflow? Are you proficient in Python for building ETL pipelines and automation workflows? Do you have prior experience working in the banking or financial services domain? Which orchestration tool(s) have you used in a production environment? Rate your expertise in writing and optimizing SQL for data transformation. Work Location: In person Application Deadline: 13/06/2025 Expected Start Date: 16/06/2025
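For illustration only (not part of the posting): a minimal sketch of the kind of parameterized BigQuery query a pipeline like the one described above might run from Python; the project, dataset, table, and column names are assumed placeholders, and authentication is assumed to come from default application credentials.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project id

sql = """
    SELECT customer_id,
           DATE(txn_ts) AS txn_date,
           SUM(amount)  AS daily_spend
    FROM   `example-project.raw_banking.transactions`
    WHERE  DATE(txn_ts) = @process_date
    GROUP  BY customer_id, txn_date
    ORDER  BY daily_spend DESC
    LIMIT  100
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("process_date", "DATE", "2025-06-01"),
    ]
)

rows = client.query(sql, job_config=job_config).result()  # blocks until the job finishes
for row in rows:
    print(row["customer_id"], row["txn_date"], row["daily_spend"])
```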
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary We are seeking a highly experienced and strategic Lead Data Architect with 8+ years of hands-on experience in designing and leading data architecture initiatives. This individual will play a critical role in building scalable, secure, and high-performance data solutions that support enterprise-wide analytics, reporting, and operational systems. The ideal candidate will be both technically proficient and business-savvy, capable of translating complex data needs into innovative architecture designs. Key Responsibilities Design and implement enterprise-wide data architecture to support business intelligence, advanced analytics, and operational data needs. Define and enforce standards for data modeling, integration, quality, and governance. Lead the adoption and integration of modern data platforms (data lakes, data warehouses, streaming, etc.). Develop architecture blueprints, frameworks, and roadmaps aligned with business objectives. Ensure data security, privacy, and regulatory compliance (e.g., GDPR, HIPAA). Collaborate with business, engineering, and analytics teams to deliver high-impact data solutions. Provide mentorship and technical leadership to data engineers and junior architects. Evaluate emerging technologies and provide recommendations for future-state architectures. Required Qualifications 8+ years of experience in data architecture, data engineering, or a similar senior technical role. Bachelor's or Master’s degree in Computer Science, Information Systems, or a related field. Expertise in designing and managing large-scale data systems using cloud platforms (AWS, Azure, or GCP). Strong proficiency in data modeling (relational, dimensional, NoSQL) and modern database systems (e.g., Snowflake, BigQuery, Redshift). Hands-on experience with data integration tools (e.g., Apache NiFi, Talend, Informatica) and orchestration tools (e.g., Airflow). In-depth knowledge of data governance, metadata management, and data cataloging solutions. Experience with real-time and batch data processing frameworks, including streaming technologies like Kafka. Excellent leadership, communication, and cross-functional collaboration skills.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank through developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank. We’re recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.

We’ll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and a customer-feature-rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need
To be successful in this role, you’ll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You’ll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling, or optimisation
An understanding of machine learning, information retrieval, or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
Knowledge of messaging, event, or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL
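For illustration only (not part of the posting): a minimal sketch of the kind of streaming ingestion described above, reading events from Kafka with Spark Structured Streaming and landing them as Parquet; the broker address, topic, schema, and storage paths are assumed placeholders.

```python
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("payments_stream_ingest").getOrCreate()

# Assumed event schema; a real deployment would source this from a schema registry.
event_schema = T.StructType([
    T.StructField("account_id", T.StringType()),
    T.StructField("amount", T.DoubleType()),
    T.StructField("event_ts", T.TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
       .option("subscribe", "payments.events")                # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Parse the Kafka value payload and apply a watermark for late-arriving events.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
          .withWatermark("event_ts", "10 minutes"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/bronze/payments/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/payments/")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```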
Posted 1 week ago
5.0 years
0 Lacs
Tamil Nadu, India
On-site
Senior Data Engineer - DBT and Snowflake
Years of Experience: 5
Job location: Chennai

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform: being in charge of, or involved in, architecting, building, and managing data flows/pipelines; constructing data stores (NoSQL, SQL); working with big data tools (Hadoop, Kafka); and using integration tools to connect sources or other databases. Candidates should hold a minimum of 5 years of experience with DBT and Snowflake.

Role Responsibility: Translate functional specifications and change requests into technical specifications. Translate business requirement documents, functional specifications, and technical specifications into related coding. Develop efficient code with unit testing and code documentation.

Role Requirement: Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.). Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.). Knowledgeable in Shell/PowerShell scripting. Knowledgeable in relational databases, non-relational databases, data streams, and file stores. Knowledgeable in performance tuning and optimization. Experience in data profiling and data validation. Experience in requirements gathering and documentation processes and performing unit testing. Understanding and implementing QA and various testing processes in the project.

Additional Requirement: Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake, ensuring the effective transformation and loading of data from diverse sources into the data warehouse or data lake. Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs. Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance. Establish DBT best practices to improve performance, scalability, and reliability. Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures. Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP). Migrate legacy transformation code into modular DBT data models.

#SeniorDataEngineer #DBTDeveloper #SnowflakeDeveloper #DBTJobs #SnowflakeJobs #ModernDataStack #SrDataEngineering #SeniorDataEngineer #ETLDeveloper #DataTransformation #SQL #Python #Airflow #Azure #AWS #GCP #Fivetran #Databricks #ADF #Glue #CloudData
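For illustration only (not part of the posting): a minimal sketch of invoking dbt programmatically from Python, an approach available in dbt-core 1.5+, as an orchestrator task might when running the Snowflake transformations described above; the model selector and target name are placeholders, and the dbt project and Snowflake profile are assumed to be configured already.

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Run models and their tests in dependency order for one slice of the DAG.
result: dbtRunnerResult = runner.invoke([
    "build",
    "--select", "staging.stg_orders+",  # placeholder model and its downstream children
    "--target", "prod",                 # placeholder target from profiles.yml
])

if not result.success:
    raise RuntimeError(f"dbt build failed: {result.exception}")

print("dbt build completed successfully")
```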
Posted 1 week ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production. As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization. You will lead development of data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration. Role Responsibilities Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings Provide strategic and technical input on the AI ecosystem including platform evolution, vendor scan, and new capability development Act as a subject matter expert for data engineering on cross functional teams in bespoke organizational initiatives by providing thought leadership and execution support for data engineering needs Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, data ops and automation, tools, and best practices Stay updated with the latest advancements in data engineering technologies and tools and evaluate their applicability for improving our data engineering capabilities Direct data engineering research to advance design and development capabilities Collaborate with stakeholders to understand data requirements and address them with data solutions Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions Demonstrate a proactive approach to identifying and resolving potential system issues. Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work Basic Qualifications Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline). 7+ years of hands-on experience in working with SQL, Python, object-oriented scripting languages (e.g. Java, C++, etc..) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 
Recognized by peers as an expert in data engineering with deep expertise in data modeling, data governance, and data pipeline management principles In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.) Familiarity with machine learning and AI technologies and their integration with data engineering pipelines Demonstrated experience interfacing with internal and external teams to develop innovative data solutions Strong understanding of Software Development Life Cycle (SDLC) and data science development lifecycle (CRISP) Highly self-motivated to deliver both independently and with strong team collaboration Ability to creatively take on new challenges and work outside comfort zone. Strong English communication skills (written & verbal) Preferred Qualifications Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required) Experience in software/product engineering Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker or other data science platforms Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes. Experience working effectively in a distributed remote team environment Hands on experience working in Agile teams, processes, and practices Expertise in cloud platforms such as AWS, Azure or GCP. Proficiency in using version control systems like Git. Pharma & Life Science commercial functional knowledge Pharma & Life Science commercial data literacy Ability to work non-traditional work hours interacting with global teams spanning across the different regions (e.g.: North America, Europe, Asia) Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Calfus
Calfus is a Silicon Valley headquartered software engineering and platforms company. The name Calfus finds its roots and ethos in the Olympic motto “Citius, Altius, Fortius – Communiter". Calfus seeks to inspire our team to rise faster, higher, stronger, and work together to build software at speed and scale. Our core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes. We stand for #Equity and #Diversity in our ecosystem and society at large. Connect with us at #Calfus and be a part of our extraordinary journey!

Position Overview: As a Data Engineer – BI Analytics & DWH, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower our organization to make data-driven decisions. You will leverage your expertise in Power BI, Tableau, and ETL processes to create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels.

Key Responsibilities:
BI Architecture & DWH Solution Design: Develop and design scalable BI analytical and DWH solutions that meet business requirements, leveraging tools such as Power BI and Tableau.
Data Integration: Oversee the ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
Data Modelling: Create and maintain data models that support analytical reporting and data visualization initiatives.
Database Management: Utilize SQL to write complex queries and stored procedures, and manage data transformations using joins and cursors.
Visualization Development: Lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization.
Collaboration: Work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
Performance Optimization: Analyse and optimize BI solutions for performance, scalability, and reliability.
Data Governance: Implement best practices for data quality and governance to ensure accurate reporting and compliance.
Team Leadership: Mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement.
Azure Databricks: Leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions.

Qualifications:
Bachelor’s degree in Computer Science, Information Systems, Data Science, or a related field.
6-15+ years of experience in BI architecture and development, with a strong focus on Power BI and Tableau.
Proven experience with ETL processes and tools, especially SSIS.
Strong proficiency in SQL Server, including advanced query writing and database management.
Exploratory data analysis with Python.
Familiarity with the CRISP-DM model.
Ability to work with different data models; familiarity with databases like Snowflake, Postgres, Redshift & MongoDB.
Experience with visualization tools such as Power BI, QuickSight, and Plotly and/or Dash.
Strong programming foundation in Python, with the versatility to handle:
Data manipulation and analysis using Pandas, NumPy & PySpark
Data serialization formats like JSON, CSV, Parquet & Pickle
Database interaction to query cloud-based data warehouses
Data pipeline and ETL tools like Airflow for orchestrating workflows and managing ETL pipelines
Scripting and automation.
Cloud services and tools such as S3 and AWS Lambda to manage cloud infrastructure; Azure SDK is a plus.
Code quality and management using version control, and collaboration in data engineering projects.
Ability to interact with REST APIs and perform web scraping tasks is a plus.

Calfus Inc. is an Equal Opportunity Employer. That means we do not discriminate against any applicant for employment, or any employee, because of age, colour, sex, disability, national origin, race, religion, or veteran status. All employment is decided based on qualifications, merit, and business need.
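For illustration only (not part of the posting): a minimal Pandas sketch of the kind of validation-then-load step implied by the ETL and data quality responsibilities above, reading a CSV and writing Parquet; the file paths, column names, and checks are assumed placeholders.

```python
import pandas as pd

# Illustrative column contract; a real pipeline would source this from a config or schema.
REQUIRED_COLUMNS = {"order_id", "customer_id", "order_ts", "amount"}


def load_and_validate(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["order_ts"])

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {sorted(missing)}")

    # Basic quality checks before anything lands in the warehouse layer.
    if df["order_id"].duplicated().any():
        raise ValueError("Duplicate order_id values found")
    if df["amount"].isna().any():
        raise ValueError("Null amounts found")

    return df


orders = load_and_validate("orders_2025_06_01.csv")        # placeholder input file
orders.to_parquet("orders_2025_06_01.parquet", index=False)  # columnar output for BI layers
print(f"Validated and wrote {len(orders)} rows")
```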
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary: Analyzes the data needs of the enterprise to build, optimize, and maintain conceptual ML/analytics models. The data scientist provides expertise in modeling and statistical approaches ranging from regression methods, decision trees, deep learning, NLP techniques, and uplift modeling to statistical modeling such as multivariate techniques.

Roles & Responsibilities:
Design the ML and MLOps stack, considering the various trade-offs.
Statistical analysis and fundamentals.
MLOps framework design and implementation.
Model evaluation best practices; train and retrain systems when necessary.
Extend existing ML libraries and frameworks; keep abreast of developments in the field.
Act as an SME and tech lead/veteran for any data engineering question, manage data scientists, and influence DS development across the company.
Promote services, contribute to the identification of innovative initiatives within the Group, and share information on new technologies in dedicated internal communities.
Ensure compliance with policies related to Data Management and Data Protection.

Preferred Experience:
Strong experience (3+ years) with building statistical models and applying machine learning techniques.
Experience (3+ years) with Big Data technologies such as Hadoop, Spark, and Airflow/Databricks.
Proven experience (3+ years) in solving complex problems with multi-layered data sets, as well as optimizing existing machine learning libraries and frameworks.
Proven experience (3+ years) taking innovation from exploration to production; this may include containerization (e.g., Docker/Kubernetes), Big Data (Hadoop, Spark), and MLOps platforms.
Deep understanding of end-to-end software development in a team, and a track record of shipping software on time.
Ensure high-quality data and understand how data generated from experimental design can produce actionable, trustworthy conclusions.
Proficiency with SQL and NoSQL databases, data warehousing concepts, and cloud-based analytics database (e.g., Snowflake, Databricks, or Redshift) administration.
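For illustration only (not part of the posting): a minimal sketch of the train-evaluate-track loop implied by the model evaluation and MLOps responsibilities above, using scikit-learn with MLflow tracking; the synthetic data, run name, and metric choice are assumptions, and the MLflow tracking URI is assumed to be the local default.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for real features purely for illustration.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline_classifier"):
    model = GradientBoostingClassifier(random_state=42)
    model.fit(X_train, y_train)

    # Evaluate on held-out data and log params, metrics, and the model artifact.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("model_type", "GradientBoostingClassifier")
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")

    print(f"Logged run with test AUC = {auc:.3f}")
```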
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank through developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank. We’re recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing, and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.

We’ll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training, and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and a customer-feature-rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need
To be successful in this role, you’ll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j, and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

You’ll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling, or optimisation
An understanding of machine learning, information retrieval, or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
Knowledge of messaging, event, or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling, and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop, and SQL
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Required: Bachelor’s degree in computer science or engineering. 7+ years of experience with data analytics, data modeling, and database design. 5+ years of experience with Vertica. 2+ years of coding and scripting (Python, Java, Scala) and design experience. 2+ years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical and multitasking skills. Excellent communication, problem solving, organizational and analytical skills. Able to work independently. Additional / preferred skills: Familiar with agile project delivery process. Knowledge of SQL and use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Able to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to fast changing environment. Experience designing and implementing automated ETL processes. Experience with MicroStrategy reporting tool.
Posted 1 week ago
7.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. Job Overview: At Coursera, our Data Science team is helping to build the future of education through data-driven decision making and data-powered products. We drive product and business strategy through measurement, experimentation, and causal inference to help Coursera deliver effective content discovery and personalized learning at scale. We believe the next generation of teaching and learning should be personalized, accessible, and efficient. With our scale, data, technology, and talent, Coursera and its Data Science team are positioned to make that vision a reality. We are seeking a highly skilled and collaborative Senior Data Scientist to join our Data Science team. In this role, you will report directly to the Director of Data Science and play a pivotal role in shaping our product strategy through data-driven insights and analytics. You will leverage your expertise in user behavior tracking, instrumentation, A/B testing, and advanced analytics techniques to gain a deep understanding of how users interact with our platform. Your insights will directly inform product development, enhance user experience, and drive engagement across various segments. Our ideal candidate possesses strong analytical skills, business acumen, and the ability to translate analysis into actionable recommendations that drive product improvements and user engagement. You should have excellent written and verbal communication skills. Collaborating closely with cross-functional teams—including product managers, designers, and engineers—you will ensure that data informs every aspect of product decision-making. Responsibilities: Design and implement instrumentation strategies for accurate tracking of user interactions and data collection. 
Develop and maintain data pipelines to ensure seamless data flow and accessibility for analysis. Analyze user behavior to provide actionable insights that inform product enhancements and drive user engagement. Conduct A/B testing and experimentation to evaluate the impact of product features and optimize user experience. Perform advanced analytics to uncover trends and patterns in user data, guiding product development decisions. Collaborate with product managers, designers, and engineers to define key performance indicators (KPIs) and assess the impact of product changes. Analyze user feedback and survey data to gain insights into user satisfaction and identify areas for improvement. Create interactive dashboards and reports to visualize data and communicate findings effectively to stakeholders. Leverage statistical analysis and predictive modeling to inform product roadmap and strategic decisions. Basic Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related technical field. 7+ years of experience using data to advise product or business teams, with a focus on product analytics. Strong SQL skills and advanced proficiency in statistical programming languages such as Python, along with experience using data manipulation libraries (e.g., Pandas, NumPy). Knowledge of data pipeline development and best practices in data management. Strong applied statistics skills, including experience with statistical inference techniques, predictive modeling and A/B testing methodologies. Intermediate proficiency in data visualization tools (e.g., Tableau, Power BI, Looker) and a willingness to learn new tools as needed. Excellent business intuition and project management abilities. Effective communication and presentation skills, with experience presenting to diverse teams and stakeholders, from individual contributors to executives. Preferred Qualifications: Familiarity with the educational technology sector, specifically with platforms like Coursera Experience with Airflow, Databricks and/or Looker Experience with Amplitude Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
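For illustration only (not part of the posting): a minimal sketch of the kind of A/B test readout described above, a two-proportion z-test on conversion rates with per-variant confidence intervals; the conversion and exposure counts are fictional placeholders.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Fictional experiment results: conversions and exposures for control vs. treatment.
conversions = np.array([1_210, 1_325])
exposures = np.array([24_000, 24_100])

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
rates = conversions / exposures
lift = (rates[1] - rates[0]) / rates[0]

# 95% confidence intervals per variant, for reporting alongside the p-value.
cis = [
    proportion_confint(c, n, alpha=0.05, method="wilson")
    for c, n in zip(conversions, exposures)
]

print(f"control={rates[0]:.3%}, treatment={rates[1]:.3%}, relative lift={lift:.1%}")
print(f"z={stat:.2f}, p={p_value:.4f}")
for name, (low, high) in zip(["control", "treatment"], cis):
    print(f"{name} 95% CI: [{low:.3%}, {high:.3%}]")
```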
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Company Resources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients. About the Role The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The role involves working with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality. Responsibilities Develop and maintain robust Python-based backend services and RESTful APIs to support machine learning models in production. Deploy and manage containerized applications using Docker and orchestrate them using Azure Kubernetes Service (AKS). Implement and manage ML pipelines using MLflow for model tracking, reproducibility, and deployment. Design, schedule, and maintain automated workflows using Apache Airflow to orchestrate data and ML pipelines. Collaborate with Data Scientists to productize NLP models, with a focus on language models, embeddings, and text preprocessing techniques (e.g., tokenization, lemmatization, vectorization). Ensure high code quality and version control using Git; manage CI/CD pipelines for reliable deployment. Handle unstructured text data and build scalable backend infrastructure for inference and retraining workflows. Participate in system design and architecture reviews for scalable and maintainable machine learning services. Proactively monitor, debug, and optimize ML applications in production environments. Communicate technical solutions and project status clearly to team leads and product stakeholders. Qualifications Minimum Experience (relevant): 5 years Maximum Experience (relevant): 9 years Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; Master's degree from an accredited college or university is preferred. Or equivalent work experience. Required Skills Proficiency in Python and frameworks like FastAPI or Flask for building APIs. Solid hands-on experience with Docker, Kubernetes (AKS), and deploying production-grade applications. Familiarity with MLflow, including model packaging, logging, and deployment. Experience with Apache Airflow for orchestrating ETL and ML workflows. Understanding of NLP pipelines, language models (e.g., BERT, GPT variants), and associated libraries (e.g., spaCy, Hugging Face Transformers). Exposure to cloud environments, preferably Azure. Strong debugging, testing, and optimization skills for scalable systems. 
Experience working with large datasets and unstructured data, especially text. Preferred Skills Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Experience building and optimizing ADF and PySpark based data pipelines, architectures and data sets on Graph and Azure Datalake. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management. A successful history of manipulating, processing and extracting value from large disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable Azure based data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. Understanding of Node.js is a plus, but not required.
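For illustration only (not part of the posting): a minimal sketch of serving an ML model behind a FastAPI endpoint, in the spirit of the Python API and MLflow responsibilities above; the model URI, feature names, and app name are placeholders, and a pydantic v2 runtime is assumed (use .dict() instead of .model_dump() on v1).

```python
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

# Placeholder registered-model URI; in practice this points at a real MLflow registry entry.
MODEL_URI = "models:/content_recommender/Production"
model = mlflow.pyfunc.load_model(MODEL_URI)

app = FastAPI(title="recommendation-scoring")


class ScoringRequest(BaseModel):
    user_tenure_days: int
    sessions_last_30d: int
    docs_viewed_last_30d: int


@app.post("/score")
def score(req: ScoringRequest) -> dict:
    # Build a one-row frame matching the model's assumed feature contract.
    features = pd.DataFrame([req.model_dump()])
    prediction = model.predict(features)
    return {"score": float(prediction[0])}

# Run locally with:  uvicorn app:app --port 8080
```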
Posted 1 week ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
This role is for one of Weekday's clients.
Min Experience: 8 years
Location: Bangalore, Mumbai
Job Type: Full-time

We are seeking a highly experienced and motivated Lead Data Engineer to join our data engineering team. This role is perfect for someone with 8-10 years of hands-on experience in designing and building scalable data infrastructure, data pipelines, and high-performance data platforms. You will lead a team of engineers, set data engineering standards, and work cross-functionally with data scientists, analysts, and software engineers to enable a data-driven culture within the organization.

Requirements
Key Responsibilities:
Technical Leadership: Lead the design and development of robust, scalable, and high-performance data architectures, including batch and real-time data pipelines using modern technologies.
Data Pipeline Development: Architect, implement, and maintain complex ETL/ELT workflows using tools like Apache Airflow, Spark, Kafka, or similar.
Data Warehouse Management: Design and maintain cloud-based data warehouses and data lakes (e.g., Snowflake, Redshift, BigQuery, Delta Lake), ensuring optimized storage and query performance.
Data Quality and Governance: Implement data validation, monitoring, and governance processes to ensure data accuracy, completeness, and security across all platforms.
Collaboration: Work closely with stakeholders, including business analysts, data scientists, and application developers, to understand data needs and deliver effective solutions.
Mentorship and Team Management: Guide and mentor junior and mid-level data engineers, fostering best practices in code, architecture, and agile delivery.
Automation and CI/CD: Develop and manage data pipeline deployment processes using DevOps and CI/CD principles.

Required Skills & Qualifications:
8-10 years of proven experience in data engineering or a related field.
Strong programming skills in Python, Scala, or Java.
Expertise in building scalable and fault-tolerant ETL/ELT processes using frameworks such as Apache Spark, Kafka, Airflow, or similar.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and tools like S3, Redshift, Snowflake, BigQuery, Glue, EMR, or Databricks.
In-depth understanding of relational and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.).
Strong SQL skills with the ability to write complex and optimized queries.
Familiarity with data modeling, data warehousing concepts, and OLAP/OLTP systems.
Experience in deploying data services using containerization (Docker, Kubernetes) and CI/CD tools like Jenkins, GitHub Actions, or similar.
Excellent communication skills with a collaborative and proactive attitude.

Preferred Qualifications:
Experience working in fast-paced, agile environments or startups.
Exposure to machine learning pipelines, MLOps, or real-time analytics.
Familiarity with data governance frameworks and data privacy regulations (GDPR, CCPA).
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
About the Role: We are looking for a visionary and hands-on Head of Business Data Analytics to lead our data-driven decision-making efforts. The ideal candidate will bring a blend of deep analytical expertise, business acumen, stakeholder engagement, and leadership skills to drive insights that contribute directly to the company’s growth and efficiency.

Key Responsibilities:
Strategic Analytics Leader: Drive enterprise-wide data strategy across product, marketing, finance, operations, and customer experience. Translate complex business problems into actionable insights through scalable analytics, forecasting tools, and ROI-focused models.
Leadership & Project Delivery: Lead and mentor high-performing analytics and BI teams. Manage cross-functional projects globally, set and achieve OKRs, and foster a results-driven culture.
Data Infrastructure & Engineering: Oversee design and optimization of data pipelines, lakes, and warehouses (e.g., Snowflake, Databricks). Ensure data quality, governance, and automation in collaboration with engineering teams.
Reporting & Insight Generation: Build and maintain real-time dashboards (Tableau, Power BI, Metabase) and conduct deep-dive analyses (cohort analysis, churn prediction, LTV segmentation, attribution, and product usage funnels) to deliver actionable insights that drive executive decision-making.

Requirements:
Strong command of SQL, Python, and data modeling (Star/Snowflake schema, SCD, normalization techniques)
Proficient in BI and statistical tools such as Tableau, Power BI, Excel, and R
Experienced in ETL development and orchestration tools (e.g., SSIS, Airflow)
Proven track record of leading high-performing analytics teams (4-10+ members)
Industry experience in fast-paced, data-driven sectors like E-commerce, Health Tech, Retail, or BFSI
Strong stakeholder management skills, working closely with CXOs, product, operations, and engineering teams

Education & Certifications:
• Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, or Engineering.
• MBA or Executive Education in Business Analytics/Data Strategy is preferred.
• Relevant certifications (e.g., Databricks, Tableau, Power BI, Google Analytics) are a plus.

Key Attributes and What We Offer
Strategic mindset with strong execution and problem-solving capabilities
Exceptional analytical, critical thinking, and decision-making skills
Effective at prioritizing, delegating, and thriving in fast-paced, ambiguous environments
Strong communication and leadership presentation skills
Competitive salary with performance-based incentives
Close collaboration with a passionate and visionary leadership team
High-impact work in a dynamic, fast-moving environment
Flexibility, continuous learning opportunities, and global exposure
Posted 1 week ago
1.0 - 3.0 years
1 - 3 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a skilled Sr. Production Support Engineer to join our dynamic Engineering team. The ideal candidate will take ownership of debugging day-to-day issues, identifying root causes, improving broken processes, and ensuring the smooth operation of our systems. You will work closely with cross-functional teams to analyze, debug, and enhance system performance, contributing to a more efficient and reliable infrastructure.

Key Responsibilities:
Incident Debugging and Resolution: Investigate and resolve daily production issues, minimizing downtime and ensuring stability. Perform root cause analysis and implement solutions to prevent recurring issues.
Data Analysis and Query Writing: Write and optimize custom queries for MySQL, Postgres, MongoDB, Redshift, or other data systems to debug processes and verify data integrity. Analyze system and application logs to identify bottlenecks or failures.
Scripting and Automation: Develop and maintain custom Python scripts for data exports, data transformation, and debugging. Create automated solutions to address inefficiencies in broken processes.
Process Improvement: Collaborate with engineering and operations teams to enhance system performance and reliability. Proactively identify and implement process improvements to optimize workflows.
Collaboration: Act as the first point of contact for production issues, working closely with developers, QA teams, and other stakeholders. Document findings, resolutions, and best practices to build a knowledge base.

Required Skills and Qualifications:
Experience: 3-5 years of hands-on experience in debugging, Python scripting, and production support in a technical environment.
Technical Proficiency: Strong experience with Python for scripting and automation with Pandas. Proficient in writing and optimizing queries for MySQL, Postgres, MongoDB, Redshift, or similar databases. Familiarity with ETL pipelines, APIs, or data integration tools is a plus.
Problem-Solving: Exceptional analytical and troubleshooting skills to quickly diagnose and resolve production issues.
Process Improvement: Ability to identify inefficiencies and implement practical solutions to enhance system reliability and workflows.
Communication: Excellent verbal and written communication skills for cross-functional collaboration and documentation.

Nice-to-Have Skills:
Exposure to tools like Airflow, Pandas, or NumPy for data manipulation and debugging.
Familiarity with production monitoring tools like New Relic or Datadog.
Experience with cloud platforms such as AWS, GCP, or Azure.
Basic knowledge of CI/CD pipelines for deployment support.
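To illustrate the kind of custom query and Pandas triage work described above, a minimal sketch; the connection string, table, and column names are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string and table/column names -- adjust to the real environment.
engine = create_engine("postgresql+psycopg2://user:password@db-host:5432/appdb")

# Pull recent orders that never reached a terminal state.
query = """
    SELECT order_id, status, created_at, updated_at
    FROM orders
    WHERE created_at >= NOW() - INTERVAL '1 day'
      AND status NOT IN ('completed', 'cancelled')
"""
stuck = pd.read_sql(query, engine)

# Simple triage: how long each order has been stuck, worst first, exported for follow-up.
stuck["hours_stuck"] = (
    pd.Timestamp.now(tz="UTC") - pd.to_datetime(stuck["updated_at"], utc=True)
).dt.total_seconds() / 3600
stuck.sort_values("hours_stuck", ascending=False).to_csv("stuck_orders.csv", index=False)
```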
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Company
Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities
Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications
Minimum Experience (relevant): 3 years
Maximum Experience (relevant): 5 years

Required Skills
Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
Experience deploying ML models in production and working with orchestration tools such as Airflow, MLflow.
Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.).
Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL).
Experience building data science models for use in front-end, user-facing applications, such as recommendation models.
Experience with REST APIs, JSON, and streaming datasets.
Understanding of graph data; Neo4j is a plus.
Strong understanding of RDBMS data structures, Azure Tables, Blob, and other data sources.
Understanding of Jenkins and CI/CD processes using Git, for cloud configs and standard code repositories such as ADF configs and Databricks.

Preferred Skills
Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; Master's degree from an accredited college or university is preferred. Or equivalent work experience.
Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases, preferably Graph DB.
Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
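As a small illustration of the embedding and semantic-similarity skills listed above, a minimal sketch using sentence-transformers; the model checkpoint, documents, and query are arbitrary examples.

```python
from sentence_transformers import SentenceTransformer, util

# The model name is just a common example checkpoint; documents and query are made up.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "How to reset my account password",
    "Quarterly revenue report for the sales team",
    "Team offsite travel and expense policy",
]
query = "I forgot my login credentials"

# Encode once, then rank documents by cosine similarity to the query embedding.
doc_embeddings = model.encode(documents, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

best_idx = int(scores.argmax())
print(f"Best match: {documents[best_idx]!r} (score={scores[best_idx].item():.3f})")
```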
Posted 1 week ago
0.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Company: Yubi
Date Opened: 06/04/2025
Job Type: Full time
Industry: Technology
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600001

About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

Job Description
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfilment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India’s fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All 5 of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.
Yubi Loans – Term loans and working capital solutions for enterprises.
Yubi Invest – Bond issuance and investments for institutional and retail participants.
Yubi Pool – End-to-end securitisations and portfolio buyouts.
Yubi Flow – A supply chain platform that offers trade financing solutions.
Yubi Co.Lend – For banks and NBFCs for co-lending partnerships.
Currently, we have onboarded over 4000+ corporates, 350+ investors and have facilitated debt volumes of over INR 40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionising the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 650+ like-minded individuals today, who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Job description
Design, build, and maintain scalable and reliable data pipelines for the ingestion, transformation, and delivery of large datasets.
Collaborate with analytics and business teams to understand data requirements and deliver actionable datasets.
Develop and optimize ETL processes using modern data engineering tools and frameworks (e.g., Apache Airflow, Spark, SQL).
Ensure data quality, integrity, and security across all stages of the data lifecycle.
Implement and monitor data solutions on cloud platforms (AWS, GCP, or Azure).
Troubleshoot and resolve data pipeline and infrastructure issues with a focus on continuous improvement.
Build and maintain data models, warehouses, and marts to support advanced analytics and reporting.
Document data architecture, workflows, and processes for internal teams.
Work closely with Data Scientists and Analysts to enable advanced analytics and machine learning initiatives.
Stay updated with industry trends and best practices in data engineering and analytics.

Requirements
Experience & Expertise:
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Delhi, Delhi
On-site
Full time | Work From Office
This Position is Currently Open
Department / Category: DEVELOPER
Listed on Jun 03, 2025
Work Location: NEW DELHI

Job Description of Databricks Developer
7+ Years Relevant Experience. More than 3 years in data integration, pipeline development, and data warehousing, with a strong focus on AWS Databricks.

Job Responsibilities:
Administer, manage, and optimize the Databricks environment to ensure efficient data processing and pipeline development
Perform advanced troubleshooting, query optimization, and performance tuning in a Databricks environment
Collaborate with development teams to guide, optimize, and refine data solutions within the Databricks ecosystem
Ensure high performance in data handling and processing, including the optimization of Databricks jobs and clusters
Engage with and support business teams to deliver data and analytics projects effectively
Manage source control systems and utilize Jenkins for continuous integration
Actively participate in the entire software development lifecycle, focusing on data integrity and efficiency within Databricks

Technical Skills:
Proficiency in Databricks platform, management, and optimization
Strong experience in AWS Cloud, particularly in data engineering and administration, with expertise in Apache Spark, S3, Athena, Glue, Kafka, Lambda, Redshift, and RDS
Proven experience in data engineering performance tuning and analytical understanding in business and program contexts
Solid experience in Python development, specifically in PySpark within the AWS Cloud environment, including experience with Terraform
Knowledge of databases (Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar) and advanced database querying
Experience with source control systems (Git, Bitbucket) and Jenkins for build and continuous integration
Understanding of continuous deployment (CI/CD) processes
Experience with Airflow and additional Apache Spark knowledge is advantageous
Exposure to ETL tools, including Informatica

Required Skills for the Databricks Developer Job: AWS Databricks, Databases, CI/CD, Source control systems

Our Hiring Process: Screening (HR Round), Technical Round 1, Technical Round 2, Final HR Round
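For illustration of the PySpark-on-AWS optimization work this role involves, a minimal sketch; the bucket paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_aggregate").getOrCreate()

# Hypothetical S3 locations -- replace with the real bucket/prefixes registered in the catalog.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Filter to completed orders and aggregate to one row per day.
daily = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.count("*").alias("order_count"), F.sum("amount").alias("gross_revenue"))
)

# Partition the output by date so downstream queries can prune files efficiently.
(
    daily.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders_daily/")
)
```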
Posted 1 week ago
0.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 78129
Date: Jun 4, 2025
Location: Delhi
Designation: Consultant
Entity:

Your potential, unleashed.
India’s impact on the global economy has increased at an exponential rate and Deloitte presents an opportunity to unleash and realize your potential amongst cutting-edge leaders and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning.

Your work profile:
As an Analyst/Consultant/Senior Consultant in our T&T Team you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations:
- Design, develop and deploy solutions using different tools, design principles and conventions.
- Configure robotics processes and objects using core workflow principles in an efficient way; ensure they are easily maintainable and easy to understand.
- Understand existing processes and facilitate change requirements as part of a structured change control process.
- Solve day-to-day issues arising while running robotics processes and provide timely resolutions.
- Maintain proper documentation for the solutions, test procedures and scenarios during UAT and Production phases.
- Coordinate with process owners and business to understand the as-is process and design the automation process flow.

Desired Qualifications
Good hands-on experience in GCP services including BigQuery, Cloud Storage, Dataflow, Cloud Dataproc, Cloud Composer/Airflow, and IAM.
Must have proficient experience in GCP databases: Bigtable, Spanner, Cloud SQL and AlloyDB.
Proficiency in either SQL, Python, Java, or Scala for data processing and scripting.
Experience in development and test automation processes through the CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers).
Experience in orchestrating data processing tasks using tools like Cloud Composer or Apache Airflow.
Strong understanding of data modeling, data warehousing and big data processing concepts.
Solid understanding and experience of relational database concepts and technologies such as SQL, MySQL, PostgreSQL or Oracle.
Design and implement data migration strategies for various database types (PostgreSQL, Oracle, AlloyDB, etc.).
Deep understanding of at least one database type with the ability to write complex SQL.
Experience with NoSQL databases such as MongoDB, Scylla, Cassandra, or DynamoDB is a plus.
Optimize data pipelines for performance and cost-efficiency, adhering to GCP best practices.
Implement data quality checks, data validation, and monitoring mechanisms to ensure data accuracy and integrity.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
Ability to work independently and manage multiple priorities effectively.
Preferably having expertise in end-to-end DW implementation (a minimal BigQuery sketch appears at the end of this listing).

Location and way of working:
Base location: Bangalore, Mumbai, Delhi, Pune, Hyderabad
This profile involves occasional travelling to client locations. Hybrid is our default way of working. Each domain has customized the hybrid approach to their unique needs.

Your role as an Analyst/Consultant/Senior Consultant:
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society. In addition to living our purpose, Analysts/Consultants/Senior Consultants across our organization must strive to be:
Inspiring - Leading with integrity to build inclusion and motivation.
Committed to creating purpose - Creating a sense of vision and purpose.
Agile - Achieving high-quality results through collaboration and team unity.
Skilled at building diverse capability - Developing diverse capabilities for the future.
Persuasive / Influencing - Persuading and influencing stakeholders.
Collaborating - Partnering to build new solutions.
Delivering value - Showing commercial acumen.
Committed to expanding business - Leveraging new business opportunities.
Analytical Acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization.
Effective communication - Able to hold well-structured and well-articulated conversations to achieve win-win possibilities.
Engagement Management / Delivery Excellence - Effectively managing engagement(s) to ensure timely and proactive execution as well as course correction for the success of engagement(s).
Managing change - Responding to a changing environment with resilience.
Managing Quality & Risk - Delivering high-quality results and mitigating risks with utmost integrity and precision.
Strategic Thinking & Problem Solving - Applying a strategic mindset to solve business issues and complex problems.
Tech Savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte.
Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviours and attitudes to become more inclusive.

How you’ll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one size fits all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone’s welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.
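For illustration of the BigQuery work referenced in this listing's desired qualifications, a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

# Uses application-default credentials; project, dataset, and table are hypothetical.
client = bigquery.Client()

sql = """
    SELECT DATE(created_at) AS day, COUNT(*) AS orders
    FROM `example-project.sales.orders`
    WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
"""

# Run the query and iterate over the result rows.
for row in client.query(sql).result():
    print(row.day, row.orders)
```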
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
hackajob is collaborating with American Express to connect them with exceptional tech professionals for this role.

You Lead the Way. We’ve Got Your Back.
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible, and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually.

We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies. This team works on creating products that enhance marketing targeting and eligibility capabilities and drive American Express marketing campaigns. As part of the team, you will get numerous opportunities to utilize and learn big data and GCP cloud technologies.

Job Responsibilities:
Responsible for delivering the features or software functionality independently and reliably.
Develop technical design documentation.
Function as a core member of an agile team by contributing to software builds through consistent development practices with respect to tools, common components, and documentation.
Participate in code reviews and automated testing.
Help other junior members of the team deliver.
Demonstrate analytical thinking - recommend improvements and best practices, and conduct experiments to prove or disprove them.
Provide continuous support for ongoing application availability.
Learn, understand, and participate fully in all team ceremonies, including work breakdown, estimation, and retrospectives.
Willingness to learn new technologies and exploit them to their optimal potential, including a substantiated ability to innovate and take pride in quickly deploying working software.
Demonstrate high energy, a willingness to learn new technologies, and pride in how fast you develop working software.

Minimum Qualifications
Bachelor's Degree in Computer Science with 4+ years of overall software design and development experience.
Experience with SQL, Adobe (AEP, CDP & AJO), React, CI/CD, Cucumber, Selenium.
Able to design reusable components and modules.
Familiarity with cloud platforms, ideally Google Cloud Platform (GCP).
Working knowledge of data storage solutions like BigQuery or Cloud SQL and data engineering tools like Airflow or Cloud Workflows.
Familiarity with Agile or other rapid application development methods.
Hands-on experience with one or more programming languages (JavaScript, Java, Python, Scala).
Knowledge of various shell scripting tools.
Strong communication and analytical skills, including effective presentation skills.
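To illustrate the Airflow-plus-BigQuery combination mentioned in the minimum qualifications, a minimal sketch that assumes the apache-airflow-providers-google package; the project, dataset, table names, and SQL are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical audience-building query; names are placeholders, not a real campaign.
AUDIENCE_SQL = """
    SELECT customer_id
    FROM `example-project.marketing.engagement`
    WHERE last_active_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
"""

with DAG(
    dag_id="campaign_eligibility_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_audience = BigQueryInsertJobOperator(
        task_id="build_eligible_audience",
        configuration={
            "query": {
                "query": AUDIENCE_SQL,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "marketing",
                    "tableId": "eligible_audience",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```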
Benefits
We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
🚀 Why Headout?
We're a rocketship: 9-figure revenue, record growth, and profitable. With $130M in revenue, guests in 100+ cities, and 18 months of profitability, Headout is the fastest-growing marketplace in the travel industry, and we're just getting started. We've raised $60M+ from top-tier investors and are building a durable company for the long term, because that's what our mission needs and deserves. We're growing, profitable and nowhere near done.

What we do is important
In an increasingly digital world, there is a desperate need to augment our human experience by getting us to interact with the real world around us and the people in it. At Headout, our mission is to be the easiest, fastest, and most delightful way to head out to a real-life experience, from immersive tours to museums to live events and everything in between.

Why now?
The foundation is strong. The opportunity ahead is even bigger. We've hit profitability, built momentum, and proven the model, but there's so much more to build. If you're looking to join a company where the trajectory is steep and your impact is real, this is the moment.

Our culture
Reinventing the travel industry isn't easy, but that's the fun part. We care deeply about ownership, craft, and impact, and we're here to do the best work of our careers. We won't pretend like it's for everyone but if you're a builder who loves solving tough problems, you'll feel right at home. Read more about our unique values here: https://bit.ly/HeadoutPlaybook

👩💻 The Role
As an Analytics Engineer at Headout, you will bridge the gap between data engineering and data analysis, creating the foundation for effective data-driven decision making across our organization. At Headout, we firmly believe that well-structured, reliable data is essential for understanding our business and delighting travelers worldwide. Working at the intersection of data infrastructure and business insights, you'll transform raw data into well-defined, documented, and accessible analytical models that empower teams throughout the company. Your technical expertise in data modeling and your understanding of business contexts will be crucial in building a robust analytics ecosystem that drives our global operations and strategic initiatives.

🌟 What makes the role stand out?
Data Foundation Builder: Design and implement the core data models that power analytics across the organization. Your work will create a single source of truth that teams rely on for consistent, accurate insights.
Technical Craftsmanship: Apply software engineering best practices to analytics development. From version control and testing to documentation and deployment, you'll bring discipline and quality to our data transformation processes.
Business Domain Translator: Become fluent in both technical and business languages. You'll translate complex business concepts into clear data structures and metrics definitions that make sense to all stakeholders.
Cross-Functional Enabler: Your work will empower diverse teams - from Operations and Marketing to Product and Finance - with the data they need to excel. The models you build will support everything from daily operational decisions to strategic business planning.
Analytics Democratizer: Create self-service analytics capabilities that enable teams throughout the organization to answer their own questions without constantly requiring analyst support.
Modern Stack Innovator: Work with cutting-edge data technologies to build scalable, efficient data solutions that grow with our business and adapt to changing requirements.

🎯 What skills you need to have
You have a minimum of 2+ years of experience in analytics, data engineering, or related roles, with a focus on data modeling and transformation
Strong SQL expertise is essential, with the ability to write complex queries and optimize data transformations for both accuracy and performance
You possess strong business acumen that helps you understand the context and importance of the metrics you're defining and modeling
Experience with modern data transformation tools and methodologies (dbt, Airflow, Prefect or similar tools) and understanding of data modeling concepts (dimensional modeling, star schemas, etc.); a small illustrative sketch follows at the end of this listing
You have a keen eye for data quality and testing methodologies to ensure reliable analytics outputs and are familiar with how to apply software development best practices to analytics code
Your communication skills enable you to collaborate effectively with both technical and business stakeholders to translate requirements into well-structured data models
Experience with data visualization tools (Looker, Tableau, Power BI) is beneficial for understanding end-user needs
Knowledge of Python or other programming languages is a plus for extending analytics capabilities beyond SQL

EEO statement
At Headout, we don't just accept differences; we celebrate them, we support them, and we thrive on them for the benefit of our employees, our partners, and the community at large. Headout provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age or disability. During the interview process, if you need assistance or an accommodation due to a disability, you may contact the recruiter assigned to your application or email us at life@headout.com.

Privacy policy
Please note that once you apply for this job profile your personal data will be retained for a period of one (1) year. Headout shall process this data for recruitment purposes only. Once the relevant job profile is filled or once the time period of one (1) year from the date of the job application has passed, whichever is later, Headout shall either delete your data or inform you that it shall keep it in its database for future roles. In compliance with the relevant privacy laws, you have the right to request access to your personal data, to request that your personal data be rectified or erased, and to request that the processing of your personal data be restricted. If you have any concerns or questions about the way Headout handles your data, you can contact our Data Protection Officer for more information.
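As a small illustration of the dimensional-modeling concepts referenced in the skills list above, a minimal pandas sketch of conforming a fact table to a dimension and deriving a metric; the tables and columns are invented for the example.

```python
import pandas as pd

# Hypothetical star-schema tables: a bookings fact and a city dimension.
fact_bookings = pd.DataFrame(
    {
        "booking_id": [1, 2, 3, 4],
        "city_id": [10, 10, 20, 20],
        "booking_date": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-01", "2024-05-03"]),
        "gross_value": [120.0, 80.0, 200.0, 50.0],
    }
)
dim_city = pd.DataFrame({"city_id": [10, 20], "city_name": ["Paris", "Rome"]})

# Conform the fact to the dimension and expose a clean, clearly named metric.
bookings = fact_bookings.merge(dim_city, on="city_id", how="left")
daily_revenue = (
    bookings.groupby(["city_name", "booking_date"], as_index=False)["gross_value"]
    .sum()
    .rename(columns={"gross_value": "gross_revenue"})
)
print(daily_revenue)
```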
Posted 1 week ago
6.0 - 8.0 years
35 - 40 Lacs
Chennai, Bengaluru
Hybrid
Work closely with the ML Architect to develop with ML frameworks (TensorFlow, scikit-learn, PyTorch). Strong background in MLOps practices, including CI/CD, containerization (Docker), and orchestration frameworks (Kubernetes, Airflow).
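For a concrete example of the model-development side of this MLOps stack, a minimal scikit-learn training script of the kind that would be containerized and scheduled; the dataset and model choice are placeholders.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset and model stand in for the real training data and architecture.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the artifact so a Docker image or model-registry step can pick it up.
joblib.dump(model, "model.joblib")
```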
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experienced with AWS, with a strong understanding of cloud services and infrastructure.
Knowledgeable in Big Data concepts and experienced with AWS Glue, including setting up jobs, data cataloging, and managing crawlers.
Proficient in using and maintaining Apache Airflow for workflow management and Terraform for infrastructure automation.
Skilled in Python for scripting and automation tasks.
Independent and proactive in solving problems and troubleshooting issues.
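To illustrate the AWS Glue automation described above, a minimal boto3 sketch; the region, crawler name, job name, and arguments are hypothetical.

```python
import boto3

# Hypothetical region, crawler, and job names -- adjust to the actual Glue setup.
glue = boto3.client("glue", region_name="ap-south-1")

# Refresh the Data Catalog for newly landed files, then kick off the ETL job.
glue.start_crawler(Name="raw_events_crawler")
response = glue.start_job_run(
    JobName="transform_raw_events",
    Arguments={"--target_path": "s3://example-bucket/curated/events/"},
)
print("Started Glue job run:", response["JobRunId"])
```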
Posted 1 week ago