8.0 - 10.0 years
20 - 35 Lacs
Ahmedabad
Remote
We are seeking a talented and experienced Senior Data Engineer to join our team and contribute to building a robust data platform on Azure Cloud. The ideal candidate will have hands-on experience designing and managing data pipelines, ensuring data quality, and leveraging cloud technologies for scalable and efficient data processing. The Data Engineer will design, develop, and maintain scalable data pipelines and systems to support the ingestion, transformation, and analysis of large datasets. The role requires a deep understanding of data workflows, cloud platforms (Azure), and strong problem-solving skills to ensure efficient and reliable data delivery.

Key Responsibilities
Data Ingestion and Integration: Develop and maintain data ingestion pipelines using tools like Azure Data Factory, Databricks, and Azure Event Hubs. Integrate data from various sources, including APIs, databases, file systems, and streaming data.
ETL/ELT Development: Design and implement ETL/ELT workflows to transform and prepare data for analysis and storage in the data lake or data warehouse. Automate and optimize data processing workflows for performance and scalability.
Data Modeling and Storage: Design data models for efficient storage and retrieval in Azure Data Lake Storage and Azure Synapse Analytics. Implement best practices for partitioning, indexing, and versioning in data lakes and warehouses.
Quality Assurance: Implement data validation, monitoring, and reconciliation processes to ensure data accuracy and consistency. Troubleshoot and resolve issues in data pipelines to ensure seamless operation.
Collaboration and Documentation: Work closely with data architects, analysts, and other stakeholders to understand requirements and translate them into technical solutions. Document processes, workflows, and system configurations for maintenance and onboarding purposes.
Cloud Services and Infrastructure: Leverage Azure services like Azure Data Factory, Databricks, Azure Functions, and Logic Apps to create scalable and cost-effective solutions. Monitor and optimize Azure resources for performance and cost management.
Security and Governance: Ensure data pipelines comply with organizational security and governance policies. Implement security protocols using Azure IAM, encryption, and Azure Key Vault.
Continuous Improvement: Monitor existing pipelines and suggest improvements for better efficiency, reliability, and scalability. Stay updated on emerging technologies and recommend enhancements to the data platform.

Skills
Strong experience with Azure Data Factory, Databricks, and Azure Synapse Analytics.
Proficiency in Python, SQL, and Spark.
Hands-on experience with ETL/ELT processes and frameworks.
Knowledge of data modeling, data warehousing, and data lake architectures.
Familiarity with REST APIs, streaming data (Kafka, Event Hubs), and batch processing.

Good to Have
Experience with tools like Azure Purview, Delta Lake, or similar governance frameworks.
Understanding of CI/CD pipelines and DevOps tools like Azure DevOps or Terraform.
Familiarity with data visualization tools like Power BI.

Competencies: Analytical Thinking; Clear and Effective Communication; Time Management; Team Collaboration; Technical Proficiency; Supervising Others; Problem Solving; Risk Management; Organizing & Task Management; Creativity/Innovation; Honesty/Integrity.

Education: Bachelor's degree in Computer Science, Data Science, or a related field. 8+ years of experience in data engineering or a similar role.
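As a rough illustration of the PySpark-on-Azure work this posting describes — not the employer's actual codebase — the sketch below reads raw JSON from Azure Data Lake Storage, applies basic validation, and writes a date-partitioned Delta table. The storage account, container paths, and column names are hypothetical, and Delta Lake support is assumed to be available (as it is on Databricks).

```python
# Minimal ETL sketch, assuming a Databricks runtime with Delta Lake available.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])                     # basic data-quality step
       .withColumn("order_date", F.to_date("order_ts"))  # derive the partition column
       .filter(F.col("amount") > 0)                      # simple validation rule
)

# Partitioning by date reflects the lake best practices the posting mentions.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```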
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Associate Managing Consultant – Performance Analytics

Advisors & Consulting Services
Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants. The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings. Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard

Roles and Responsibilities
Client Impact
Manage deliverable development and workstreams on projects across a range of industries and problem statements
Contribute to and/or develop analytics strategies and programs for large, regional, and global clients by leveraging data and technology solutions to unlock client value
Manage working relationships with client managers, and act as a trusted and reliable partner
Create predictive models using segmentation and regression techniques to drive profits
Review analytics end-products to ensure accuracy, quality, and timeliness
Proactively seek new knowledge and structure project work to facilitate the capture of intellectual capital with minimal oversight

Team Collaboration & Culture
Develop sound business recommendations and deliver effective client presentations
Plan, organize, and structure own work and that of junior project delivery consultants to identify effective analysis structures to address client problems and synthesize analyses into relevant findings
Lead team and external meetings, and lead or co-lead project management
Contribute to the firm's intellectual capital and solution development
Grow from coaching to enable ownership of day-to-day project management across client projects, and mentor junior consultants
Develop effective working relationships with local and global teams, including business partners

Qualifications
Basic qualifications
Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics
Experience managing clients or internal stakeholders
Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence
Knowledge of metrics, measurements, and benchmarking for complex and demanding solutions across multiple industry verticals
Data and analytics experience such as working with data analytics software (e.g., Python, R, SQL, SAS) and building, managing, and maintaining database structures
Advanced Word, Excel, and PowerPoint skills
Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment
Ability to communicate effectively in English and the local office language (if applicable)
Eligibility to work in the country where you are applying, as well as to apply for travel visas as required by travel needs

Preferred qualifications
Additional data and analytics experience working with the Hadoop framework and coding using Impala, Hive, or PySpark, or working with data visualization tools (e.g., Tableau, Power BI)
Experience managing tasks or workstreams in a collaborative team environment
Experience coaching junior delivery consultants
Relevant industry expertise
MBA or master’s degree with relevant specialization (not required)

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
4-6 years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and SparkSQL), Hadoop, and Hive
Good hands-on experience with Python and Bash scripts
Good understanding of SQL and data warehouse concepts
Strong analytical, problem-solving, data analysis, and research skills
Demonstrable ability to think outside the box and not be dependent on readily available tools
Excellent communication, presentation, and interpersonal skills are a must

Good to have:
Hands-on experience with cloud-platform Big Data technologies (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis)
Orchestration with Airflow; any job-scheduler experience
Experience migrating workloads from on-premises to cloud, and cloud-to-cloud migrations

Roles & Responsibilities
Develop efficient ETL pipelines per business requirements, following development standards and best practices.
Perform integration testing of the pipelines created in the AWS environment.
Provide estimates for development, testing, and deployment across environments.
Participate in peer code reviews to ensure our applications comply with best practices.
Create cost-effective AWS pipelines with the required AWS services, e.g., S3, IAM, Glue, EMR, Redshift.
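A minimal sketch of the DataFrame-plus-SparkSQL pattern this description calls for, assuming illustrative S3 bucket names and an events schema; a real Glue or EMR job would add its own job setup and configuration.

```python
# Hedged sketch: PySpark DataFrame API plus Spark SQL over S3 data.
# Bucket names and schema are assumptions, not from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-aggregates").getOrCreate()

events = spark.read.parquet("s3://example-raw-bucket/events/")
events.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_events/")
```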
Posted 4 days ago
8.0 - 15.0 years
0 Lacs
Karnataka
On-site
This is a hands-on Databricks Senior Developer position within State Street Global Technology Services. We are seeking a candidate with a strong understanding of Big Data technology and significant development expertise with Databricks. In this role, you will be responsible for managing the Databricks platform for the application, implementing enhancements, performance improvements, and AI/ML use cases, as well as leading a team.

As a Databricks Sr. Developer, your responsibilities will include designing and developing custom high-throughput and configurable frameworks/libraries. You should possess the ability to drive change through collaboration, influence, and the demonstration of proof of concepts. Additionally, you will be accountable for all aspects of the software development lifecycle, from design and coding to integration testing, deployment, and documentation. Collaboration within an agile project team is essential, and you must ensure that best practices and coding standards are adhered to by the team. Providing technical mentoring to the team and overseeing the ETL team are also key aspects of this role.

To excel in this position, the following skills are highly valued: data analysis and data exploration experience, familiarity with agile delivery environments, hands-on development skills in Java, exposure to DevOps best practices and CI/CD (such as Jenkins), proficiency in working within a multi-developer environment using version control (e.g., Git), strong knowledge of Databricks SQL/PySpark for data engineering pipelines, expertise in Unix, Python, and complex SQL, and strong critical thinking, communication, and problem-solving abilities. Troubleshooting DevOps pipelines and experience with AWS services are also essential.

The ideal candidate will hold a Bachelor's degree in a computer or IT-related field, with at least 15 years of overall big data pipeline experience, 8+ years of hands-on experience with Databricks, and 8+ years of cloud-based development expertise, including AWS services. Job ID: R-774606
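The posting mentions configurable Databricks SQL/PySpark frameworks for data engineering pipelines; one common building block in such frameworks is a Delta Lake upsert, sketched below under assumed paths and a join key — this is illustrative, not the employer's actual framework.

```python
# Minimal Delta Lake upsert sketch; paths and the join key are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/accounts/")
target = DeltaTable.forPath(spark, "/mnt/curated/accounts/")

(target.alias("t")
       .merge(updates.alias("s"), "t.account_id = s.account_id")
       .whenMatchedUpdateAll()      # apply changes to existing rows
       .whenNotMatchedInsertAll()   # insert rows not yet in the target
       .execute())
```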
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GCP Data Engineer
Location: Chennai 34350
Type: Contract
Budget: Up to ₹18 LPA
Notice Period: Immediate joiners preferred

🧾 Job Description
We are seeking an experienced Google Cloud Platform (GCP) Data Engineer to join our team in Chennai. This role is centered on designing and building cloud-based data solutions that support AI/ML, analytics, and business intelligence use cases. You will develop scalable and high-performance pipelines, integrate and transform data from various sources, and support both real-time and batch data needs.

🛠️ Key Responsibilities
Design and implement scalable batch and real-time data pipelines using GCP services such as BigQuery, Dataflow, Dataform, Cloud Composer (Airflow), Data Fusion, Dataproc, Cloud SQL, Compute Engine, and others.
Build data products that combine historical and live data for business insights and analytical applications.
Lead efforts in data transformation, ingestion, integration, data mart creation, and activation of data assets.
Collaborate with cross-functional teams including AI/ML, analytics, DevOps, and product teams to deliver robust cloud-native solutions.
Optimize pipelines for performance, reliability, and cost-effectiveness.
Contribute to data governance, quality assurance, and security best practices.
Drive innovation by integrating AI/ML features, maintaining strong documentation, and applying continuous improvement strategies.
Provide production support, troubleshoot failures, and meet SLAs using GCP’s monitoring tools.
Work within an Agile environment, follow CI/CD practices, and apply test-driven development (TDD).

✅ Skills Required
Strong experience in BigQuery, Dataflow, Dataform, Data Fusion, Cloud SQL, Compute Engine, Dataproc, Airflow (Cloud Composer), Cloud Functions, and Cloud Run
Programming experience with Python, Java, PySpark, or Apache Beam
Proficiency in SQL (5+ years) for complex data handling
Hands-on with Terraform, Tekton, Cloud Build, GitHub, Docker
Familiarity with Apache Kafka, Pub/Sub, Kubernetes
GCP Certified (Associate or Professional Data Engineer)

⭐ Skills Preferred
Deep knowledge of cloud architecture and infrastructure-as-code tools
Experience in data security, regulatory compliance, and data governance
Experience with AI/ML solutions or platforms
Understanding of DevOps pipelines, CI/CD using Cloud Build, and containerization
Exposure to financial services data or similar regulated environments
Experience in mentoring and leading engineering teams
Tools: JIRA, Artifact Registry, App Engine

🎓 Education
Required: Bachelor's degree (in Computer Science, Engineering, or a related field)
Preferred: Master’s degree

📌 Additional Details
Role Type: Contract-based
Work Location: Chennai, Onsite
Target Candidates: Mid to senior level with a minimum of 5+ years of data engineering experience
Skills: GCP, Apache, PySpark, Data, Docker
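For the Dataflow work described, a batch pipeline in Apache Beam's Python SDK might look roughly like the following; the project, region, bucket, and table names are placeholders, and the destination table is assumed to already exist.

```python
# Illustrative Beam batch pipeline, runnable on Dataflow. All names are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DataflowRunner", project="example-project",
                          region="us-central1",
                          temp_location="gs://example-bucket/tmp")

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/orders-*.json")
     | "Parse" >> beam.Map(json.loads)
     | "Filter" >> beam.Filter(lambda r: r.get("amount", 0) > 0)  # simple validation
     | "Write" >> beam.io.WriteToBigQuery(
           "example-project:sales.orders",   # existing table assumed
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```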
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be responsible for leading a migration project from Oracle Cloud to an on-premises MySQL database, with a focus on migrating SQL scripts and Python code. Your expertise in on-premises solutions, specifically Linux-based standalone databases, and in connecting with multiple data sources will be crucial to the success of the project.

Your core competencies should include proficiency in Big Data technologies such as Hadoop (HDFS, YARN), Hive, and PySpark. You should have experience in designing end-to-end data ingestion and ETL pipelines, and in orchestrating and monitoring them efficiently. Additionally, your skills in cluster management, including primary/secondary cluster configuration, load balancing, and HA setup, will be essential. You should be well-versed in on-premises infrastructure, MySQL databases, and storage and management tools like HDFS and Hive Metastore.

Your expertise in programming and scripting languages such as Python, PySpark, and SQL will be required for tasks related to performance optimization, Spark tuning, Hive query optimization, and resource management. Experience in building and deploying automated data pipelines in an on-premises environment using CI/CD practices will be advantageous. Furthermore, your knowledge of integrating with logging and monitoring tools, along with experience in security frameworks, will contribute to effective monitoring and governance of the data environment.
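One plausible step in such a migration — reading a source table over JDBC with PySpark and landing it in the on-premises MySQL target — is sketched below. Hosts, credentials, and table names are placeholders, and the actual migration approach (bulk export, CDC, etc.) may well differ.

```python
# Hedged JDBC copy sketch. The Oracle JDBC driver jar would also need to be on
# the classpath; only the MySQL connector is declared here for brevity.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("oracle-to-mysql")
         .config("spark.jars.packages", "com.mysql:mysql-connector-j:8.4.0")
         .getOrCreate())

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//source-host:1521/ORCLPDB")  # placeholder
      .option("dbtable", "SALES.ORDERS")
      .option("user", "migration_user")
      .option("password", "***")
      .load())

(df.write.format("jdbc")
   .option("url", "jdbc:mysql://onprem-host:3306/sales")              # placeholder
   .option("dbtable", "orders")
   .option("user", "migration_user")
   .option("password", "***")
   .mode("append")
   .save())
```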
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Engineering Lead, you will collaborate with marketing, analytics, and business teams to understand data requirements and develop data solutions that address critical business inquiries. Your responsibilities will include leading the implementation and strategic optimization of tag management solutions such as Tealium and Google Tag Manager (GTM) to ensure precise and comprehensive data capture. You will leverage your expertise in Google Analytics 4 (GA4) to configure and customize data collection processes for enhanced insights. Additionally, you will architect scalable and performant data models on Google Cloud, utilizing BigQuery for data warehousing and analysis purposes.

In this role, you will use SQL and scripting languages like JavaScript and HTML for data extraction, manipulation, and visualization. You will also play a pivotal role in mentoring and guiding a team of engineers, fostering a culture of collaboration and continuous improvement. Staying updated on the latest trends and technologies in data engineering and analytics, you will bring innovative ideas to the table and drive deliverables by mentoring team members effectively.

To qualify for this position, you must have experience with Tealium and tag management tools, along with a proven ability to communicate effectively, build positive relationships, and drive project success. Your expertise in tag management solutions such as Tealium and GTM will be crucial for comprehensive website and app data tracking, including the implementation of scripting languages for tag extensions. Proficiency in Tealium concepts like iQ Tag Management, AudienceStream, EventStream API Hub, Customer Data Hub, and debugging tools is essential. Experience in utilizing Google Analytics 4 (GA4) for advanced data collection and analysis, as well as knowledge of Google Cloud, particularly Google BigQuery for data warehousing and analysis, will be advantageous.

Preferred qualifications for this role include experience in a similar industry (e.g., retail, e-commerce, digital marketing), proficiency with Python/PySpark for data processing and analysis, working knowledge of Snowflake for data warehousing, experience with Airflow or similar workflow orchestration tools for managing data pipelines, and familiarity with AWS cloud technology. Additionally, skills in frontend technologies like React, JavaScript, and HTML, coupled with Python expertise for backend development, will be beneficial.

Overall, as a Data Engineering Lead, you will play a critical role in designing robust data pipelines and architectures that support data-driven decision-making for websites and mobile applications, ensuring seamless data orchestration and processing through best-in-class ETL tools and technologies. Your expertise in Tealium, Google Analytics 4, and SQL will be instrumental in driving the success of data engineering initiatives within the organization.
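To make the GA4-on-BigQuery requirement concrete, here is an illustrative query against a GA4 export dataset using the google-cloud-bigquery client; the project and dataset names (analytics_123456) are assumptions, though the events_* table naming and _TABLE_SUFFIX filter follow the standard GA4 export layout.

```python
# Hedged sketch: count GA4 events by name for one month of export tables.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

sql = """
    SELECT event_name, COUNT(*) AS events
    FROM `example-project.analytics_123456.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20250101' AND '20250131'
    GROUP BY event_name
    ORDER BY events DESC
"""

for row in client.query(sql).result():
    print(row.event_name, row.events)
```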
Posted 5 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Big Data Developer
Location: Chennai
Experience: 7+ years
Work Mode: Work from Office

Key Skills Required
Google Cloud Platform (GCP): BigQuery (BQ), Dataflow, Dataproc, Cloud Spanner
Strong knowledge of distributed systems, data processing frameworks, and big data architecture
Proficiency in programming languages like Python, Java, or Scala

Roles and Responsibilities
BigQuery (BQ):
Design and develop scalable data warehouses using BigQuery.
Optimize SQL queries for performance and cost-efficiency in BigQuery.
Implement data partitioning and clustering strategies.
Dataflow:
Build and maintain batch and streaming data pipelines using Apache Beam on GCP Dataflow.
Ensure data transformation, enrichment, and cleansing as per business needs.
Monitor and troubleshoot pipeline performance issues.
Dataproc:
Develop and manage Spark and Hadoop jobs on GCP Dataproc.
Perform ETL/ELT operations using PySpark, Hive, or other tools.
Automate and orchestrate jobs for scheduled data workflows.
Cloud Spanner:
Design and manage globally distributed, scalable transactional databases using Cloud Spanner.
Optimize schema and query design for performance and reliability.
Implement high availability and disaster recovery strategies.
General Responsibilities:
Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
Implement data quality and data governance best practices.
Ensure security and compliance with GCP data handling standards.
Participate in code reviews, CI/CD deployments, and Agile development cycles.
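The partitioning and clustering strategies mentioned under BigQuery could be expressed as DDL like the sketch below, submitted via the Python client; the table name and schema are illustrative only.

```python
# Hedged sketch: create a day-partitioned, clustered BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

ddl = """
    CREATE TABLE IF NOT EXISTS `example-project.warehouse.transactions`
    (
      txn_id      STRING,
      customer_id STRING,
      amount      NUMERIC,
      txn_ts      TIMESTAMP
    )
    PARTITION BY DATE(txn_ts)   -- prunes scans (and cost) by day
    CLUSTER BY customer_id      -- co-locates rows for common filters
"""

client.query(ddl).result()
```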
Posted 5 days ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior - Palantir

Job Overview
Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience, with strong analytical, problem-solving, and programming skills, an understanding of business KPIs, and strong communication skills. They should be self-learning, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. They must be able to multi-task and demonstrate the ability to work with a diverse group of stakeholders in the healthcare/life sciences/pharmaceutical domains.

Responsibilities and Duties
Technical
Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry.
Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.).
Analyse data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour).
Querying and programming skills: use programming-language queries or scripts (e.g., Python, SQL) to interact with the data and perform analyses.
Understand relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles.
Distributed frameworks with automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir, with external libraries (e.g., Pandas, NumPy, etc.); a Foundry-style transform is sketched after this posting.
API integration: integrate Palantir with other systems and applications using APIs for seamless data flow. Understand integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flows, incremental loads, etc.).
Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency.
Unit testing, issue identification, debugging and troubleshooting, and end-user documentation.
Strong experience with data warehousing, data engineering, and data modelling problem statements.
Knowledge of security-related principles, ensuring data privacy and security while working with sensitive information.
Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical
Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions.
Ensure compliance with policies for data privacy, security, and regulatory requirements.
Provide training and support to end-users to maximize the effective use of Palantir Foundry.
Self-driven learning of technologies being adopted per organizational requirements.
Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
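For readers unfamiliar with Foundry's Code Repository workflow, a PySpark transform in the transforms-python style might look roughly like this; the dataset paths are placeholders and the exact API surface can vary by Foundry version.

```python
# Hedged Foundry transform sketch; dataset paths are hypothetical.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Example/clean/patients"),
    source=Input("/Example/raw/patients"),
)
def clean_patients(source):
    # Standardize a date column and de-duplicate as a pipeline-friendly step.
    return (source
            .withColumn("visit_date", F.to_date("visit_ts"))
            .dropDuplicates(["patient_id", "visit_date"]))
```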
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
At EY, you will have the opportunity to shape a career that is as unique as you are, leveraging the global reach, support, inclusive environment, and cutting-edge technology to unleash your full potential. Your distinctive voice and perspective are crucial in our journey towards continuous improvement at EY. By joining us, you will not only craft an exceptional experience for yourself but also contribute to creating a better working world for all.

The mission of EY's GDS Tax Technology team is to design, implement, and integrate technology solutions that enhance client service and support engagement teams. As a member of EY's core Tax practice, you will deepen your tax technical expertise while honing your database, data analytics, and programming skills.

In a landscape of ever-evolving regulations, tax departments are faced with the challenge of collecting, organizing, and analyzing vast amounts of data. This data often needs to be sourced from various systems and departments within an organization. Managing the diversity and volume of data efficiently poses significant challenges and time constraints for companies.

Collaborating closely with EY partners, clients, and tax technical experts, members of the GDS Tax Technology team develop and integrate technology solutions that add value, enhance efficiencies, and equip clients with disruptive and cutting-edge tools to support Tax functions. GDS Tax Technology collaborates with clients and professionals in areas such as Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. The team offers solution architecture, application development, testing, and maintenance support to the global Tax service line, both proactively and in response to specific requests.

EY is currently looking for a Data Engineer - Staff to join our Tax Technology practice in India.

Key Responsibilities:
- Must have proficiency in Azure Databricks.
- Strong command of Python and PySpark programming is essential.
- Solid understanding of Azure SQL Database and Azure SQL Data Warehouse concepts.
- Develop, maintain, and optimize all data layer components for new and existing systems, including databases, stored procedures, ETL packages, and SQL queries.
- Experience with Azure data platform offerings.
- Effective communication with team members and stakeholders.

Qualifications & Experience Required:
- Candidates should possess 1.5 to 3 years of experience on the Azure data platform (Azure Databricks) with a strong grasp of Python and PySpark.
- Excellent verbal and written communication skills.
- Ability to work independently as a contributor.
- Experience with Azure Data Factory, SSIS, or other ETL tools.

Join EY in building a better working world, where diverse teams across 150 countries leverage data and technology to provide assurance and support growth, transformation, and operational excellence for clients. EY teams engage in assurance, consulting, law, strategy, tax, and transactions, asking insightful questions to address the complex challenges of today's world.
Posted 5 days ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior - Palantir

Job Overview
Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience, with strong analytical, problem-solving, and programming skills, an understanding of business KPIs, and strong communication skills. They should be self-learning, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. They must be able to multi-task and demonstrate the ability to work with a diverse group of stakeholders in the healthcare/life sciences/pharmaceutical domains.

Responsibilities and Duties
Technical
Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry.
Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.).
Analyse data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour).
Querying and programming skills: use programming-language queries or scripts (e.g., Python, SQL) to interact with the data and perform analyses.
Understand relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles.
Distributed frameworks with automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir, with external libraries (e.g., Pandas, NumPy, etc.).
API integration: integrate Palantir with other systems and applications using APIs for seamless data flow. Understand integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flows, incremental loads, etc.).
Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency.
Unit testing, issue identification, debugging and troubleshooting, and end-user documentation.
Strong experience with data warehousing, data engineering, and data modelling problem statements.
Knowledge of security-related principles, ensuring data privacy and security while working with sensitive information.
Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical
Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions.
Ensure compliance with policies for data privacy, security, and regulatory requirements.
Provide training and support to end-users to maximize the effective use of Palantir Foundry.
Self-driven learning of technologies being adopted per organizational requirements.
Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior - Palantir

Job Overview
Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience, with strong analytical, problem-solving, and programming skills, an understanding of business KPIs, and strong communication skills. They should be self-learning, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. They must be able to multi-task and demonstrate the ability to work with a diverse group of stakeholders in the healthcare/life sciences/pharmaceutical domains.

Responsibilities and Duties
Technical
Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry.
Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.).
Analyse data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour).
Querying and programming skills: use programming-language queries or scripts (e.g., Python, SQL) to interact with the data and perform analyses.
Understand relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles.
Distributed frameworks with automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir, with external libraries (e.g., Pandas, NumPy, etc.).
API integration: integrate Palantir with other systems and applications using APIs for seamless data flow. Understand integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flows, incremental loads, etc.).
Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency.
Unit testing, issue identification, debugging and troubleshooting, and end-user documentation.
Strong experience with data warehousing, data engineering, and data modelling problem statements.
Knowledge of security-related principles, ensuring data privacy and security while working with sensitive information.
Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical
Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions.
Ensure compliance with policies for data privacy, security, and regulatory requirements.
Provide training and support to end-users to maximize the effective use of Palantir Foundry.
Self-driven learning of technologies being adopted per organizational requirements.
Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior - Palantir

Job Overview
Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience, with strong analytical, problem-solving, and programming skills, an understanding of business KPIs, and strong communication skills. They should be self-learning, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. They must be able to multi-task and demonstrate the ability to work with a diverse group of stakeholders in the healthcare/life sciences/pharmaceutical domains.

Responsibilities and Duties
Technical
Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry.
Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.).
Analyse data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour).
Querying and programming skills: use programming-language queries or scripts (e.g., Python, SQL) to interact with the data and perform analyses.
Understand relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles.
Distributed frameworks with automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir, with external libraries (e.g., Pandas, NumPy, etc.).
API integration: integrate Palantir with other systems and applications using APIs for seamless data flow. Understand integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flows, incremental loads, etc.).
Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency.
Unit testing, issue identification, debugging and troubleshooting, and end-user documentation.
Strong experience with data warehousing, data engineering, and data modelling problem statements.
Knowledge of security-related principles, ensuring data privacy and security while working with sensitive information.
Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical
Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions.
Ensure compliance with policies for data privacy, security, and regulatory requirements.
Provide training and support to end-users to maximize the effective use of Palantir Foundry.
Self-driven learning of technologies being adopted per organizational requirements.
Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior - Palantir

Job Overview
Big Data Developer/Senior Data Engineer with 3 to 6+ years of experience, with strong analytical, problem-solving, and programming skills, an understanding of business KPIs, and strong communication skills. They should be self-learning, detail-oriented team members who can consistently meet deadlines and possess the ability to work independently as needed. They must be able to multi-task and demonstrate the ability to work with a diverse group of stakeholders in the healthcare/life sciences/pharmaceutical domains.

Responsibilities and Duties
Technical
Design, develop, and maintain data models, integrations, and workflows within Palantir Foundry.
Detailed understanding and hands-on knowledge of Palantir solutions (e.g., Usecare, DTI, Code Repository, Pipeline Builder, etc.).
Analyse data within Palantir to extract insights for easy interpretation and exploratory data analysis (e.g., Contour).
Querying and programming skills: use programming-language queries or scripts (e.g., Python, SQL) to interact with the data and perform analyses.
Understand relational data structures and data modelling to optimize data storage and retrieval based on OLAP engine principles.
Distributed frameworks with automation using Spark APIs (e.g., PySpark, Spark SQL, RDD/DF) to automate processes and workflows within Palantir, with external libraries (e.g., Pandas, NumPy, etc.).
API integration: integrate Palantir with other systems and applications using APIs for seamless data flow. Understand integration analysis, specification, and solution design for different scenarios (e.g., batch/real-time flows, incremental loads, etc.).
Optimize data pipelines and fine-tune Foundry configurations to enhance system performance and efficiency.
Unit testing, issue identification, debugging and troubleshooting, and end-user documentation.
Strong experience with data warehousing, data engineering, and data modelling problem statements.
Knowledge of security-related principles, ensuring data privacy and security while working with sensitive information.
Familiarity with integrating machine learning and AI capabilities within the Palantir environment for advanced analytics.

Non-Technical
Collaborate with stakeholders to identify opportunities for continuous improvement, understanding business needs and innovation in data processes and solutions.
Ensure compliance with policies for data privacy, security, and regulatory requirements.
Provide training and support to end-users to maximize the effective use of Palantir Foundry.
Self-driven learning of technologies being adopted per organizational requirements.
Work as part of a team or individually as an engineer in a highly collaborative fashion.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
About StatusNeo: At StatusNeo, we are committed to redefining the way businesses operate. As a leader in digital transformation, we leverage cutting-edge technologies and innovative strategies to empower organizations around the globe. Our partnerships with industry giants and our commitment to continuous learning and improvement provide an unparalleled platform for professional growth. Embrace a career at StatusNeo, where we value diversity and inclusivity and foster a hybrid work culture.

Role: Data Engineer
Location: Gurugram

Key experience:
- 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
- Strong experience in API integrations, including experience working with large-scale API endpoints.
- Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
- Experience with AWS OpenSearch APIs for managing search indices.
- Terraform expertise for automating and managing cloud infrastructure.
- Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
- Strong understanding of data flow architectures, document stores, and journal-based systems.
- Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
- Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.

Additional Preferred Qualifications:
- Strong problem-solving and debugging skills in distributed systems.
- Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
- Solid understanding of distributed data processing and workflow orchestration in AWS environments.

Soft Skills:
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
- Analytical mindset, with a focus on performance optimization and continuous improvement.
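A hedged sketch of one hop in the SQS-to-S3-to-OpenSearch flow implied by the stack above: a Lambda handler that reads queued pointers to S3 objects and indexes them. The endpoint, index name, and message shape are assumptions, and request signing/authentication (normally required for OpenSearch) is omitted for brevity.

```python
# Illustrative Lambda handler; message shape and endpoint are assumptions.
import json
import boto3
import urllib3

s3 = boto3.client("s3")
http = urllib3.PoolManager()
OPENSEARCH_URL = "https://search-example-domain.us-east-1.es.amazonaws.com"

def handler(event, context):
    for record in event["Records"]:               # SQS batch delivery
        msg = json.loads(record["body"])          # assumed {"bucket": ..., "key": ...}
        obj = s3.get_object(Bucket=msg["bucket"], Key=msg["key"])
        doc = json.loads(obj["Body"].read())
        # NOTE: real deployments would sign this request (SigV4) or use an auth proxy.
        http.request(
            "PUT",
            f"{OPENSEARCH_URL}/documents/_doc/{msg['key']}",
            body=json.dumps(doc),
            headers={"Content-Type": "application/json"},
        )
```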
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Python Developer
Key Skills: Python, SQL, application development, ETL, PySpark
Experience: 6-8 years
Location: Hyderabad, Bangalore, and Pune
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract to hire
Notice Period: Immediate to 10 days

Job Description: Python and SQL application development with a data engineering background (ETL, Databricks, PySpark); this is a development role, not an analyst or data scientist position. Good to have: cloud experience, preferably AWS (not mandatory).
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 people across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging platforms including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines to efficiently process and analyze large-scale datasets.
- Implement agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases.
- Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary.
- Stay updated on the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones.
- Mentor and guide junior team members to aid their professional growth and skill development.
- Ensure model explainability, fairness, and compliance with responsible AI principles.
- Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Proficiency in Python, TensorFlow, PyTorch, PySpark, scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques.
- Knowledge of Reinforcement Learning from Human Feedback (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous.

Preferred Qualifications / Skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

Job Title: Principal Consultant
Location: India-Noida
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Apr 11, 2025, 9:36:00 AM
Unposting Date: May 11, 2025, 1:29:00 PM
Master Skills List: Digital
Job Category: Full Time
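To ground the vector-database requirement, a minimal FAISS retrieval sketch follows; the embed function is a stand-in for a real embedding model (for example, an Azure OpenAI embedding endpoint), and the corpus is toy data.

```python
# Hedged RAG-retrieval sketch using FAISS (pip package: faiss-cpu).
import numpy as np
import faiss

def embed(texts):
    # Placeholder embedder: returns random 384-d vectors. A real pipeline would
    # call an embedding model here instead.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

docs = ["refund policy", "shipping times", "warranty terms"]
vectors = embed(docs)

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over 384-d vectors
index.add(vectors)

_, neighbours = index.search(embed(["how long is the warranty?"]), 2)
print([docs[i] for i in neighbours[0]])      # top-2 candidate passages
```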
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Madurai, Tamil Nadu
On-site
You are an experienced Java architect responsible for designing and implementing sophisticated Java-based software solutions. Your role involves overseeing system architecture, selecting appropriate technologies, ensuring scalability and performance, collaborating with cross-functional teams, mentoring junior developers, and staying updated on emerging Java technologies, focusing on areas such as microservices, cloud computing, and high-availability systems.

**Key Responsibilities:**

**Architecture Design:**
- Define overall system architecture for large-scale Java applications, including component design, data flow, and integration patterns.
- Select appropriate Java frameworks and libraries based on project requirements.
- Design for scalability, performance, and security considerations.
- Implement microservices architecture where applicable.

**Technology Evaluation and Selection:**
- Research and evaluate new Java technologies, frameworks, and tools.
- Stay updated on cloud platforms like AWS, Azure, and GCP for potential integration.
- Make informed technology decisions based on project needs.

**Development Leadership:**
- Guide development teams on technical best practices and design patterns.
- Provide code reviews and mentor junior developers.
- Troubleshoot complex technical issues and design flaws.

**Collaboration and Stakeholder Management:**
- Work closely with product managers, business analysts, and other stakeholders to understand requirements.
- Communicate technical concepts effectively to non-technical audiences.
- Collaborate with DevOps teams to ensure smooth deployment and monitoring.

**Performance Optimization:**
- Identify performance bottlenecks and implement optimization strategies.
- Monitor system health and performance metrics.

**Essential skills for a Java architect:**
- Deep expertise in Java core concepts: object-oriented programming, collections, concurrency, JVM internals.
- Advanced Java frameworks: Spring Boot, Spring MVC, Hibernate, JPA.
- Architectural patterns: microservices, event-driven architecture, RESTful APIs.
- Database design and SQL: proficiency in relational databases and SQL optimization; proficiency in NoSQL (Elasticsearch/OpenSearch).
- Cloud computing knowledge: AWS, Azure, GCP.
- Hands-on experience in ETL and ELT.
- Knowledge of Python and PySpark would be an added advantage.
- Strong communication and leadership skills.

**Minimum Qualifications:**
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Deep expertise in Java core concepts, advanced Java frameworks, architectural patterns, database design and SQL, and cloud computing; hands-on experience in ETL and ELT; knowledge of Python and PySpark.
- Strong communication and leadership skills.

This is a full-time position for a Principal Consultant based in Madurai, India. If you possess the required qualifications and skills, we invite you to apply for this role.
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra
On-site
MS - Banking & FS, Mumbai
Posted On: 29 Jul 2025
End Date: 27 Sep 2025
Required Experience: 3 - 5 Years
No. of Openings: 1
Designation: Test Engineer
Closing Date: 27 Sep 2025
MainBU: Quality Engineering
Sub BU: MS - Banking & FS
Country: India
Region: MEA
State: Maharashtra
City: Mumbai
Working Location: Mumbai
Client Location: NA
Skill: JAVA

JOB DESCRIPTION
Previous experience working as a QA automation engineer (2+ years of experience).
Advanced programming skills, including test automation tools and CI/CD integration.
Familiarity with programming and scripting languages, including Python and Spark.
Expertise in data testing using Java/Scala, SQL, NoSQL, and ETL processes.
Databricks Delta Lake experience.
Strong in database and data warehousing concepts.
Proficiency in statistical procedures, experiments, and machine learning techniques.
Must have knowledge of the basics of data analytics and data modelling.
Ability to develop test automation frameworks.
Ability to work as an individual contributor.
Strong attention to detail.
Familiarity with Git or other version control systems.
Understanding of Agile development methodologies.
Willingness to switch to manual testing whenever required.
Excellent analytical and problem-solving skills.
Detailed knowledge of application functions, bug fixing, and testing protocols.
Good written and verbal communication skills.
Azure Data Studio experience.
Run tests using Python/PySpark-based frameworks.
Knowing Java will be an advantage.
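The "run tests using Python/PySpark-based frameworks" requirement could translate into pytest-style data checks like the sketch below; the dataset paths and validation rules are assumptions.

```python
# Hedged sketch of PySpark data tests in pytest style; paths/rules are illustrative.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder.master("local[2]")
            .appName("qa-data-tests").getOrCreate())

def test_no_duplicate_account_ids(spark):
    df = spark.read.parquet("/data/curated/accounts/")   # dataset under test
    assert df.count() == df.select("account_id").distinct().count()

def test_amounts_are_non_negative(spark):
    df = spark.read.parquet("/data/curated/transactions/")
    assert df.filter("amount < 0").count() == 0
```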
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/ Engineering Main location: India, Karnataka, Bangalore Position ID: J0725-1837 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve. At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position, however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: Lead Data Engineer and Developer Position: Tech Lead Experience:8+ Years Category: Software Development Main location: Hyderabad, Chennai Position ID: J0625-0503 Employment Type: Full Time Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving. Strong understanding of Cloud engineering concepts, particularly AWS. Participate in Sprint planning and squad operational activities to guide the team on right prioritization. SQL - Expert (Must have) AWS (Redshift/Lambda/Glue/SQS/SNS/Cloudwatch/Step function/CDK(or Terrafoam)) - Expert (Must have) Pyspark -Intermediate/Expert AWS Airflow - Intermediate (Nice of have) Python - Intermediate (Must have or Pyspark knowledge) Your future duties and responsibilities: Lead Data Engineers and Developers with clarity on execution, design, architecture and problem solving. Strong understanding of Cloud engineering concepts, particularly AWS. Participate in Sprint planning and squad operational activities to guide the team on right prioritization. Required qualifications to be successful in this role: Must have Skills: SQL - Expert (Must have) AWS (Redshift/Lambda/Glue/SQS/SNS/Cloudwatch/Step function/CDK(or Terrafoam)) - Expert (Must have) Pyspark -Intermediate/Expert Python - Intermediate (Must have or Pyspark knowledge) Good to have skills: AWS Airflow - Intermediate (Nice of have) Skills: Apache Spark Python SQL What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. 
You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
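As a rough illustration of the must-have stack above, here is a minimal PySpark sketch, not CGI's actual codebase: the bucket names, schema, and job configuration are hypothetical, and it shows the common pattern of curating S3 data into partitioned Parquet as a staging layer ahead of a Redshift load.

# Minimal batch-curation sketch; all paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-batch").getOrCreate()

# Hypothetical raw input; in a Glue job this path would come from job arguments.
raw = spark.read.json("s3://example-raw-bucket/orders/2024-01-01/")

daily_totals = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "region")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"),
       )
)

# Partitioned Parquet in S3 is a common staging layer before a Redshift COPY.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_order_totals/"
)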
Posted 5 days ago
3.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer specializing in ETL, you should have a minimum of 7 to 8 years of relevant experience. The position is open across Pan India, and immediate joiners are strongly preferred. You will be expected to demonstrate expertise across a set of mandatory skills: ETL development, Synapse, PySpark, ADF, SSIS, Databricks, SQL, Apache Airflow, and both Azure and AWS. Proficiency in all of the listed skills is a prerequisite for this role. The selection process involves three rounds: L1 with an external panel, L2 with an internal panel, and L3 with the client. Experience expectations are 7+ years as an ETL developer, 5+ years with PySpark, 3 to 4+ years with SSIS, 4+ years with Databricks, 6+ years with SQL, 4+ years with Apache Airflow, 3 to 4 years with Azure and AWS, and 3 to 4 years with Synapse.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You should have 3-6 years of experience as a developer on the Palantir Foundry platform, along with a strong understanding of data integration, data modeling, and software development principles. Proficiency in languages such as Python, PySpark, and Scala-based Spark is essential, and experience with SQL and relational databases is a must. Your responsibilities will include designing, developing, and deploying models and applications within the Palantir Foundry platform. You will integrate data from various sources and ensure the robustness and reliability of data pipelines. Customizing and configuring the platform to meet business requirements will also be part of your role. The position is at the Consultant level and is based in Hyderabad, Bangalore, Mumbai, Pune, Chennai, Kolkata, or Gurgaon. The notice period for this role is 0-90 days.
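For orientation, here is a minimal sketch assuming Foundry's Python transforms API (transforms.api); the dataset paths and column names are hypothetical placeholders, not a real project.

# Hypothetical Foundry transform: cleans a raw orders dataset.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Example/analytics/clean_orders"),  # hypothetical output dataset
    raw_orders=Input("/Example/raw/orders"),    # hypothetical input dataset
)
def clean_orders(raw_orders):
    # Foundry passes the input as a PySpark DataFrame; keep completed
    # orders and derive a date column for downstream models.
    return (
        raw_orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("created_at"))
    )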
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together.

American Express has embarked on an exciting transformation, driven by an energetic new team and an inclusive pool of candidates, to give everyone an equal opportunity for growth. Service Operations is responsible for providing reliable platforms for hundreds of critical applications and utilities within American Express. Its primary focus is to provide technical expertise and tooling to ensure the highest level of reliability and availability for critical applications, to provide consultation and strategic recommendations by quickly assessing and remediating complex availability issues, and to drive automation and efficiencies that increase the quality, availability, and auto-healing of complex processes.

Responsibilities include, but are not limited to:
Design, develop, and maintain data pipelines.
Serve as a core member of an agile team that drives user story analysis and elaboration, and design and develop responsive web applications using the best engineering practices.
Work closely with data scientists, analysts and other partners to ensure the flawless flow of data.
Build and optimize reports for analytical and business purposes.
Monitor and resolve data pipeline issues to ensure smooth operation.
Implement data quality checks and validation processes to ensure the accuracy, completeness and consistency of data.
Implement data governance policies, access controls, and security measures to protect critical data and ensure compliance.
Develop a deep understanding of integrations with other systems and platforms within the supported domains.
Bring a culture of innovation, ideas, and continuous improvement; challenge the status quo, demonstrate risk taking, and implement creative ideas.
Manage your own time, and work well both independently and as part of a team.
Adopt emerging standards while promoting best practices and consistent framework usage.
Work with Product Owners to define requirements for new features and plan increments of work.

Minimum Qualifications:
BS or MS degree in computer science, computer engineering, or another technical subject area, or equivalent.
0 to 3 years of work experience.
At least 1 to 3 years of hands-on experience with SQL, including schema design, query optimization and performance tuning.
Experience with distributed computing frameworks like Hadoop, Hive, and Spark for processing large-scale data sets.
Proficiency in a programming language such as Python or PySpark for building data pipelines and automation scripts.
Understanding of cloud computing, with exposure to BigQuery and to Airflow for executing DAGs (a minimal DAG sketch appears at the end of this listing).
Knowledge of CI/CD, Git commands and deployment processes.
Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues and optimize data processing workflows.
Excellent communication and collaboration skills.

We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
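To illustrate the "Airflow to execute DAGs" qualification above, here is a minimal sketch assuming Airflow 2.4+; the DAG id, task names, and callables are hypothetical placeholders, not an American Express pipeline.

# Hypothetical two-step daily pipeline: extract, then load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Placeholder: pull the day's records from a source system.
    print("extracting orders for", context["ds"])

def load_to_warehouse(**context):
    # Placeholder: load the extracted data into the warehouse (e.g. BigQuery).
    print("loading orders for", context["ds"])

with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extract succeeds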
Posted 5 days ago
8.0 - 15.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As the Manager - Data Science at Holcim, you will play a crucial role in the Group's Global Advanced Analytics CoE by enabling our businesses to run insights-driven operations and decision making through cutting-edge analytics tools and techniques. Your primary responsibility will be to work closely with business and domain subject matter experts to understand pain points and opportunities, and to develop analytical models that identify patterns and predict the outcomes of key business processes.

You will identify the most suitable modeling techniques and apply machine learning and deep learning algorithms to create self-correcting models and algorithms. You will collaborate with product development teams to industrialize AI/ML models and conduct rapid, iterative prototyping of minimum viable solutions. Additionally, you will test hypotheses on raw datasets to derive meaningful insights and identify new opportunity areas. Your role will encompass all aspects of data, including data acquisition, data exploration, feature engineering, building and optimizing models, and deploying Gen AI solutions. You will design full-stack ML solutions in distributed computing environments such as AWS and GCP, and may develop ML solutions for deployment on edge or mobile devices if required.

To be successful in this role, you should have 12-15 years of total experience, with at least 8 years of relevant analytics experience. Industry experience and knowledge, particularly in manufacturing or operations functions within the building materials, manufacturing, process, or pharma sectors, is preferred. Hands-on experience with statistical and data science techniques, as well as with developing and deploying Gen AI solutions, is critical for this position.

In terms of technical skills, you should have over 8 years of hands-on experience with advanced machine learning and deep learning techniques and algorithms, such as Decision Trees, Random Forests, SVMs, Regression, Clustering, Neural Networks, CNNs, RNNs, LSTMs, and Transformers (see the sketch following this listing). Proficiency in statistical computing languages like Python and PySpark to manipulate data and draw insights from large datasets is essential. Experience with cloud platforms like AWS, DL frameworks such as TensorFlow, Keras, or PyTorch, and familiarity with business intelligence tools and data frameworks will be advantageous.

In addition to technical skills, leadership and soft skills are equally important for this role. You should lead by example on values and culture, be open-minded and collaborative, and be an effective team player. Working in a multicultural and diverse team, dealing with ambiguity, and communicating openly and effectively with various stakeholders are key aspects of this position. Being driven for success, aspiring to a culture of service excellence, and always prioritizing customer satisfaction, people, and the business are qualities that will set you up for success in this role. If you are motivated by the opportunity to make a significant impact through data-driven decisions and innovative solutions, we invite you to build your future with us at Holcim by applying for this role.
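As a minimal scikit-learn sketch of one technique named above (Random Forests), the following trains on synthetic data; the features and labels are entirely hypothetical, not Holcim process data.

# Hypothetical pass/fail classifier on synthetic sensor-style features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                 # synthetic sensor readings
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic pass/fail outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))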
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As an AWS Senior Data Engineer at our organization, you will support data engineering activities across a range of technologies and tools. Your primary tasks will include using SQL for data querying and manipulation, developing data processing pipelines with PySpark, and integrating data from API endpoints. You will also work with AWS services such as Glue for ETL processes, S3 for data storage, Redshift for data warehousing, Step Functions for workflow automation, Lambda for serverless computing, CloudWatch for monitoring, and AppFlow for data integration. You should have experience with CloudFormation and administrative roles, as well as knowledge of the SDLF & OF frameworks for data lifecycle management. Understanding S3 ingestion patterns and version control with Git is essential for this role. Exposure to tools like JFrog, ADO, SNOW, Visual Studio, DBeaver, and SF inspector will help you carry out your data engineering tasks effectively. Your role will involve collaborating with cross-functional teams to ensure the successful implementation of data solutions within the AWS environment.
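As a minimal boto3 sketch of one common S3 ingestion pattern mentioned above, the following Lambda handler starts a Glue job whenever a new object lands in a bucket; the Glue job name and argument key are hypothetical placeholders.

# Hypothetical S3-triggered Lambda that kicks off a Glue ingestion job.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName="example-ingest-job",  # hypothetical Glue job
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print("started Glue run", response["JobRunId"], "for", key)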