Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 - 5.0 years
6 - 10 Lacs
Chennai
Work from Office
We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

What awaits you / Job Profile: Design and develop data visualizations using Amazon QuickSight to present complex data in clear, understandable dashboards. Create interactive dashboards and reports that allow end users to explore data and draw meaningful conclusions. Work on data preparation and ensure that good-quality data is used in visualizations. Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations. Ensure that data visualizations are user-friendly, intuitive, and aesthetically pleasing, and optimize the user experience by incorporating best practices. Identify and address performance bottlenecks in data queries and visualizations. Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight. Provide training and support to end users and stakeholders on how to interact with dashboards. Self-manage, explore the latest technical developments, and incorporate them into the project. Bring experience in analytics, reporting, and business intelligence tools. Work within the Agile methodology, attending daily standups and using Agile tools. Lead technical discussions with customers to find the best possible solutions.

What should you bring along? Must have: Overall experience of 2-5 years in data visualization development. Minimum of 2 years in QuickSight and 1-2 years in other BI tools such as Tableau, Power BI, or Qlik. Good at writing complex SQL scripts and dataset modeling. Hands-on with AWS Athena, RDS, S3, IAM, permissions, and logging and monitoring services. Experience working with various data sources and databases such as Oracle, MySQL, S3, and Athena. Ability to work with large datasets and design efficient data models for visualization. Prior experience working in an Agile, Scrum/Kanban model. Nice to have: Knowledge of data ingestion and data pipelines in AWS. Knowledge of Amazon Q or AWS LLM services to enable AI integration. Must-have skills: QuickSight, Tableau, SQL, AWS. Good-to-have skills: QlikView, data engineering, AWS LLM.
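For illustration, here is a minimal sketch (Python with boto3) of the kind of data-preparation step this role describes: running an Athena query whose result set could back a QuickSight dataset. The region, database, table, and S3 output location are placeholders invented for the example, not details from the posting.

```python
import boto3

# Hypothetical data-preparation step: aggregate raw sales data in Athena
# so a QuickSight dashboard can read a small, clean result set.
athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT region,
               DATE_TRUNC('month', order_date) AS month,
               SUM(revenue) AS total_revenue
        FROM sales_raw
        WHERE order_date >= DATE '2024-01-01'
        GROUP BY region, DATE_TRUNC('month', order_date)
    """,
    QueryExecutionContext={"Database": "analytics_db"},       # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/quicksight/"},
)
print(response["QueryExecutionId"])
```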
Posted 1 week ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324631. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google DataProc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.
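As a hedged sketch of the batch side of such a pipeline, the PySpark snippet below reads raw CSV files, standardises types, deduplicates, and writes partitioned Parquet to a data lake. All paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch ETL: raw CSV in, clean partitioned Parquet out.
spark = SparkSession.builder.appName("customer-orders-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://raw-zone/orders/")

clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))       # normalise timestamps
       .withColumn("amount", F.col("amount").cast("double"))     # enforce numeric type
       .withColumn("order_date", F.to_date("order_ts"))          # derive partition column
       .dropDuplicates(["order_id"])                             # deduplicate on business key
       .filter(F.col("amount") > 0)                              # basic quality filter
)

clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-zone/orders/")
```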
Posted 1 week ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324632. We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.

Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google DataProc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.
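This posting mirrors the previous one, so rather than repeating the batch example, here is a hedged sketch of the streaming leg mentioned in both: consuming a Kafka topic with Spark Structured Streaming and landing micro-batches in a data lake. It assumes the spark-sql-kafka connector package is available on the cluster; the broker, topic, and paths are placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical streaming ingestion: Kafka topic to Parquet on S3.
# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("orders-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "orders")                     # placeholder topic
         .load()
)

# Kafka values arrive as bytes; cast to string for downstream parsing.
events = stream.selectExpr("CAST(value AS STRING) AS json_payload",
                           "timestamp AS kafka_ts")

query = (
    events.writeStream.format("parquet")
          .option("path", "s3://curated-zone/orders-stream/")
          .option("checkpointLocation", "s3://curated-zone/_checkpoints/orders/")
          .start()
)
query.awaitTermination()
```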
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Manager.

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements: Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure. Preferred Skill Sets: Spark, PySpark, Azure. Years of Experience Required: 8-12. Education Qualification: B.Tech / M.Tech / MBA / MCA. Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration. Degrees/Field of Study preferred: Certifications (if blank, certifications not specified). Required Skills: Data Science. Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}. Desired Languages (if blank, desired languages not specified). Travel Requirements: Not Specified. Available for Work Visa Sponsorship? No. Government Clearance Required? No. Job Posting End Date
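A minimal sketch of the PySpark-on-Azure pattern described above, assuming a Databricks notebook (where `spark` is predefined) and hypothetical ADLS Gen2 paths: read raw JSON, apply basic quality filters, and write a partitioned Delta table.

```python
from pyspark.sql import functions as F

# Hypothetical Databricks cell: the storage account, containers, and
# column names are placeholders, not client infrastructure.
raw = spark.read.json("abfss://raw@mystorageacct.dfs.core.windows.net/events/")

valid = (
    raw.filter(F.col("event_id").isNotNull())       # drop malformed records
       .dropDuplicates(["event_id"])                # deduplicate on event key
       .withColumn("event_date", F.to_date("event_ts"))
)

(valid.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("abfss://curated@mystorageacct.dfs.core.windows.net/events/"))
```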
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

ML Ops Engineer (Senior Consultant)

Key Responsibilities: Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.

Required Skills and Experience: Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.

Beneficial Skills and Experience: Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences.

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
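To make the SageMaker deployment work concrete, here is a hedged sketch using the SageMaker Python SDK; the ECR image URI, model artifact path, IAM role, and endpoint name are placeholders, not EY infrastructure.

```python
import sagemaker
from sagemaker.model import Model

# Hypothetical ML Ops deployment step: wrap a trained model artifact and
# expose it as a real-time inference endpoint.
session = sagemaker.Session()

model = Model(
    image_uri="123456789012.dkr.ecr.ap-south-1.amazonaws.com/my-inference:latest",
    model_data="s3://my-ml-artifacts/churn/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-scoring",   # placeholder endpoint name
)
```

In a production pipeline this step would typically be triggered from CI/CD after model validation, with the endpoint monitored and rolled back automatically on regression.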
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description: Controls engineering is responsible for building the next-generation firm-wide control plane for our front office desks. The successful candidate will use their deep technical skills to inform the implementation of a highly scalable message-driven architecture, processing ~3bn messages per day and making ‘safe to trade’ determinations in real time. The role will also involve building out web applications that allow users to register, develop and administer controls on the platform.

Role Overview: This role offers the opportunity to work in a competitive and nimble team of engineers implementing high-performance code using open-source libraries. Candidates will work directly with a variety of stakeholders, including the product managers and Global Banking & Markets risk managers, to improve our controls data platform. The team, based in London and India, focuses on the control data solution for Global Banking & Markets Operational Risk and on delivering new features. Use data to guide decision-making, developing or enhancing tools as necessary to collect it. Understand market rules, regulations, exchange service offerings, and front-to-back business functions, and build systems to facilitate them. Communicate with traders, sales, clients and compliance officers about new systems, feature requests, explanations of existing features, etc. Bar-raise solution design and ensure development best practices are followed within delivery teams.

Job Duties: Deliver and design new features for the Control Solutions Team. Investigate incidents to review and redesign existing flows to improve platform stability. Contribute to SDLC documentation and guidance including templates, patterns, and controls. Actively participate as a member of a global team on larger development projects and assume responsibility for components of global projects, depending on need. Collaborate with engineering leadership, developers, and operations through written and verbal presentations.

Minimum Education Requirements/Degree and Field: Bachelor’s degree in Computer Science, Information Technology, or a related field.

Minimum Years of Experience Required: Six (6) years of experience in the job offered or in a related data engineering, software engineering or full-stack software engineering position.

Special Skills and/or Licenses Required to Perform the Job: Prior employment must include six (6) years of experience with: working with software engineering principles and practices; working knowledge of at least two high-level programming languages like Java or Python; working knowledge of algorithms, data structures and enterprise applications; formulating clear and concise written and verbal descriptions of software and systems for engineering stakeholders, and tracking and managing delivery of the same. Strong communication skills and the ability to work in a team. Strong analytical and problem-solving skills. Ability to solve high-performance engineering problems in a language-agnostic manner.

Preferred Qualifications: Experience with Kubernetes deployment architectures. Experience building trading controls within an investment bank. Experience in distributed systems (Kafka, Flink). Experience with UI technologies like React, JavaScript. Experience in microservices architecture. Experience with NoSQL (Mongo, Elastic, Hadoop), in-memory (MemSQL, Ignite), cloud (Snowflake) and relational (DB2, SybaseIQ) data store solutions. Experience in data-driven performance analysis and optimizations.
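Purely as an illustration of a message-driven control check, and not the firm's implementation, the sketch below consumes order events from Kafka with the kafka-python client and applies a single hypothetical notional-limit rule per message.

```python
import json
from kafka import KafkaConsumer

# Illustrative only: the topic, broker, record schema, and the limit
# check are all invented for this example.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

NOTIONAL_LIMIT = 10_000_000  # hypothetical per-order control threshold

for message in consumer:
    order = message.value
    # A real control plane would evaluate many registered rules here;
    # this sketch applies a single pre-trade limit check.
    safe = order.get("notional", 0) <= NOTIONAL_LIMIT
    print(order["order_id"], "SAFE" if safe else "BLOCKED")
```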
About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity
Posted 1 week ago
4.0 - 6.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse to enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with Application development teams. Roles and Responsibilities Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads. Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models impacting report generation times. Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization. Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions. Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying. Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake’s native features. Monitor system performance and proactively recommend architectural improvements and capacity planning. Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times. Experience with Unistore will be an added advantage
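Two of the tuning levers named above, clustering keys and materialized views, can be applied through the Snowflake Python connector as in this hedged sketch. Connection details and object names are invented, and note that materialized views require Snowflake Enterprise edition or higher.

```python
import snowflake.connector

# Hypothetical connection: account, credentials, and objects are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="REPORTING_WH", database="ANALYTICS", schema="MART",
)
cur = conn.cursor()

# Cluster a large fact table on the columns most reports filter by,
# so micro-partition pruning can skip irrelevant data.
cur.execute("ALTER TABLE FACT_SALES CLUSTER BY (SALE_DATE, REGION)")

# Pre-aggregate a hot query path so dashboards read a materialized view
# instead of scanning the fact table on every refresh.
cur.execute("""
    CREATE OR REPLACE MATERIALIZED VIEW MV_MONTHLY_SALES AS
    SELECT DATE_TRUNC('MONTH', SALE_DATE) AS MONTH,
           REGION,
           SUM(AMOUNT) AS TOTAL_AMOUNT
    FROM FACT_SALES
    GROUP BY DATE_TRUNC('MONTH', SALE_DATE), REGION
""")
conn.close()
```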
Posted 1 week ago
6.0 - 11.0 years
20 - 35 Lacs
Bengaluru
Work from Office
NOTE: We are only looking for candidates who can join immediately or within 15 days. Experience level: 6+ years. Location: Bangalore (candidates who are currently in Bangalore can apply).

Qualifications we are looking for: Master's/Bachelor's degree in Computer Science, Electrical Engineering, Information Systems or another technical discipline; advanced degree preferred. Minimum of 7+ years of software development experience (with a concentration in data-centric initiatives), with demonstrated expertise in leveraging standard development best-practice methodologies. Minimum 4+ years of experience in Hadoop using core Java programming, Spark, Scala, Hive and Golang. Expertise in the object-oriented programming language Java. Experience using CI/CD processes, version control and bug tracking tools. Experience in handling very large data volumes in real-time and batch mode. Experience with automation of job execution and validation. Strong knowledge of database concepts. Strong team player. Strong communication skills with proven ability to present complex ideas and document them in a clear and concise way. Quick learner; self-starter, detailed and in-depth.
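The posting centres on Java and Scala, but as a compact illustration of the Spark-with-Hive work it describes, here is a PySpark sketch (database and table names are hypothetical) that aggregates a Hive table and writes the result back to the metastore.

```python
from pyspark.sql import SparkSession

# Hypothetical batch job against a Hive metastore.
spark = (
    SparkSession.builder.appName("daily-aggregation")
    .enableHiveSupport()          # lets spark.sql() see Hive tables
    .getOrCreate()
)

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM logs.raw_events
    WHERE event_date = '2024-01-01'
    GROUP BY event_date
""")

# Persist the aggregate as a managed Hive table for downstream jobs.
daily.write.mode("overwrite").saveAsTable("logs.daily_event_counts")
```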
Posted 1 week ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

ML Ops Engineer (Senior Consultant)

Key Responsibilities: Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.

Required Skills and Experience: Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.

Beneficial Skills and Experience: Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences.

EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education: Master's Degree.

Required Technical and Professional Expertise: Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical and Professional Experience: Certification in AWS; Databricks- or Cloudera Spark-certified developers.
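A minimal AWS Glue job skeleton of the kind this role builds, shown as a hedged sketch: read a catalogued source as a DynamicFrame, deduplicate, and write Parquet to S3. The database, table, and bucket names are placeholders.

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Hypothetical Glue ETL job body (runs inside the Glue job runtime,
# where the awsglue library is available).
glue_context = GlueContext(SparkContext.getOrCreate())

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",          # placeholder Glue Data Catalog database
    table_name="clickstream",   # placeholder catalogued table
)

# Convert to a Spark DataFrame for standard transformations.
df = dyf.toDF().dropDuplicates(["event_id"])

df.write.mode("append").parquet("s3://curated-bucket/clickstream/")
```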
Posted 1 week ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is seeking creative, high-energy and driven data engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What you'll do: Perform general application development activities, including unit testing, code deployment to the development environment and technical documentation. Work on one or more projects, making contributions to unfamiliar code written by team members. Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning. Diagnose and resolve performance issues. Document code and processes so that any other developer is able to dive in with minimal effort. Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security and scalability. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit engineering team employing agile software development practices. Triage product or system issues and debug, track, and resolve them by analyzing the sources of issues and the impact on network or service operations and quality. Write, debug, and troubleshoot code in mainstream open-source technologies. Lead the effort for sprint deliverables, and solve problems of medium complexity.

What Experience You Need: Bachelor's degree or equivalent experience. 3+ years of experience working with software design and expertise in Core Java, Java, Spark and SQL programming.

What Could Set You Apart: Knowledge of or experience with Apache Beam for stream and batch data processing. Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to data visualization tools or platforms.
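Since Apache Beam is called out as a differentiator, here is a hedged Python sketch of Beam's unified batch/stream model: a small pipeline that counts events per type from a text input. The file names and record layout are invented for the example.

```python
import apache_beam as beam

# Hypothetical Beam batch pipeline: each input line is assumed to be
# "event_type,payload"; we count lines per event type.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("events.txt")
        | "KeyByType" >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("event_counts")
    )
```

The same transform graph can run on a streaming source by swapping the I/O connectors and choosing a streaming-capable runner, which is the portability Beam is designed around.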
Posted 1 week ago
8.0 - 13.0 years
35 - 50 Lacs
Mumbai
Work from Office
Hiring a Big Data Lead with 8+ years of experience for US shift time.
Must Have: - Big Data: Spark, Hadoop, Kafka, Hive, Flink - Backend: Python, Scala - NoSQL: MongoDB, Cassandra - Cloud: AWS/Azure/GCP, Snowflake, Databricks - Docker, Kubernetes, CI/CD
Required Candidate Profile: - Excellent at mentoring/training in Big Data: HDFS, YARN, Airflow, Hive, MapReduce, HBase, Kafka, and ETL/ELT, real-time streaming, data modeling - Immediate joiner is a plus - Excellent communication skills
Posted 1 week ago
5.0 - 10.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Job Title: Data Engineer. Experience: 5+ Years. Location: Hyderabad (Onsite). Availability: Immediate Joiners Preferred.

Job Description: We are seeking an experienced Data Engineer with a strong background in Java, Spark, and Scala to join our dynamic team in Hyderabad. The ideal candidate will be responsible for building scalable data pipelines, optimizing data processing workflows, and supporting data-driven solutions for enterprise-grade applications. This is a full-time onsite role.

Key Responsibilities: Design, develop, and maintain robust and scalable data processing pipelines. Work with large-scale data using distributed computing technologies like Apache Spark. Develop applications and data integration workflows using Java and Scala. Collaborate with cross-functional teams including Data Scientists, Analysts, and Product Managers. Ensure data quality, integrity, and security in all data engineering solutions. Monitor and troubleshoot performance and data issues in production systems.

Must-Have Skills: Strong hands-on experience with Java, Apache Spark, and Scala. Proven experience working on large-scale data processing systems. Solid understanding of distributed systems and performance tuning.

Good-to-Have Skills: Experience with Hadoop, Hive, and HDFS. Familiarity with data warehousing concepts and ETL processes. Exposure to cloud data platforms is a plus.

Desired Candidate Profile: 5+ years of relevant experience in data engineering or big data technologies. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently in a fast-paced environment.

Additional Details: Work Mode: Onsite (Hyderabad). Employment Type: Full-time. Notice Period: Immediate joiners highly preferred; candidates serving their notice period may also apply.
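As a sketch of the distributed-systems tuning the posting asks about (shown in PySpark rather than Java/Scala purely for brevity), the snippet below broadcasts a small dimension table to avoid a shuffle join and caches a DataFrame that several aggregations reuse; all paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

orders = spark.read.parquet("s3://lake/orders/")        # large fact table
countries = spark.read.parquet("s3://lake/countries/")  # small dimension table

# Broadcasting the small side replaces a shuffle join with a map-side join;
# caching avoids recomputing the enriched frame for each aggregation below.
enriched = orders.join(F.broadcast(countries), "country_code").cache()

enriched.groupBy("country_name").agg(F.sum("amount").alias("revenue")).show()
enriched.groupBy("country_name").agg(F.countDistinct("customer_id").alias("customers")).show()
```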
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Associate Analyst, R Programmer-3

Overview: The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy. The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers. The Institute is composed of a team of economists and data scientists that utilize and synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentations, global thought leadership, media participation, and commercial work through the company’s product suites.

About the Role: We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences; develop reusable R functions and packages to support analysis and automation; create and format analytical content using R Markdown and/or Quarto; design and build scalable Shiny apps; develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You: proficient in R and the RStudio IDE; proficient in R packages like dplyr for data cleaning, transformation, and aggregation; familiarity with dependency management and documentation in R (e.g. roxygen2); familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development; experience writing SQL and working with relational databases; creative and passionate about data, coding, and technology; a strong collaborator who can also work independently; organized and able to prioritise work across multiple projects; comfortable working with engineers, product owners, data scientists, economists.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250450
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Associate Analyst, R Programmer-2

Overview: The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy. The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers. The Institute is composed of a team of economists and data scientists that utilize and synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentations, global thought leadership, media participation, and commercial work through the company’s product suites.

About the Role: We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences; develop reusable R functions and packages to support analysis and automation; create and format analytical content using R Markdown and/or Quarto; design and build scalable Shiny apps; develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You: proficient in R and the RStudio IDE; proficient in R packages like dplyr for data cleaning, transformation, and aggregation; familiarity with dependency management and documentation in R (e.g. roxygen2); familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development; experience writing SQL and working with relational databases; creative and passionate about data, coding, and technology; a strong collaborator who can also work independently; organized and able to prioritise work across multiple projects; comfortable working with engineers, product owners, data scientists, economists.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250449
Posted 1 week ago
6.0 - 8.0 years
20 - 30 Lacs
Nagpur, Pune
Work from Office
Build and maintain scalable Big Data pipelines using Hadoop, PySpark, and SQL for batch and real-time processing. Collaborate with cross-functional teams to transform, optimize, and secure large datasets while ensuring data quality and performance.
Posted 1 week ago
8.0 - 10.0 years
20 - 32 Lacs
Nagpur, Pune
Work from Office
Design and implement scalable Big Data architecture and pipelines using tools like Hadoop, Spark, Kafka, and Hive. Collaborate with cross-functional teams to build real-time/batch systems, ensure data quality and governance.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Associate Analyst, R Programmer-1

Overview: The Mastercard Economics Institute (MEI) is an economics lab powering scale at Mastercard by owning economic thought leadership in support of Mastercard’s efforts to build a more inclusive and sustainable digital economy. The Economics Institute was launched in 2020 to analyze economic trends through the lens of the consumer to deliver tailored and actionable insights on economic issues for customers, partners and policymakers. The Institute is composed of a team of economists and data scientists that utilize and synthesize the anonymized and aggregated data from the Mastercard network together with public data to bring powerful insights to life, in the form of 1:1 presentations, global thought leadership, media participation, and commercial work through the company’s product suites.

About the Role: We are looking for an R programmer to join Mastercard’s Economics Institute, reporting to the team lead for Economics Technology. An individual who will: create clear, compelling data visualisations that communicate economic insights to diverse audiences; develop reusable R functions and packages to support analysis and automation; create and format analytical content using R Markdown and/or Quarto; design and build scalable Shiny apps; develop interactive visualisations using JavaScript charting libraries (e.g. Plotly, Highcharts, D3.js) or front-end frameworks (e.g. React, Angular, Vue.js); work with databases and data platforms (e.g. SQL, Hadoop); write clear, well-documented code that others can understand and maintain; collaborate using Git for version control.

All About You: proficient in R and the RStudio IDE; proficient in R packages like dplyr for data cleaning, transformation, and aggregation; familiarity with dependency management and documentation in R (e.g. roxygen2); familiar with version control concepts and tools (e.g. Git, GitHub, Bitbucket) for collaborative development; experience writing SQL and working with relational databases; creative and passionate about data, coding, and technology; a strong collaborator who can also work independently; organized and able to prioritise work across multiple projects; comfortable working with engineers, product owners, data scientists, economists.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250448
Posted 1 week ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
DESCRIPTION: Amazon has built a global reputation for being the most customer-centric company, a company that customers from all over the world recognize, value, and trust for both our products and services. Amazon has a fast-paced environment where we “Work Hard, Have Fun and Make History.” As an increasing number of enterprises move their critical systems to the cloud, AWS India is in need of highly efficient technical consulting talent to help our largest and strategically important customers navigate the operational challenges and complexities of AWS Cloud. We are looking for Technical Consultants to support our customers’ creative and transformative spirit of innovation across all technologies, including Compute, Storage, Database, Data Analytics, Application services, Networking, Serverless and more. This is not a sales role, but rather an opportunity to be the principal technical advisor for organizations ranging from start-ups to large enterprises. As a Technical Account Manager, you will be the primary technical point of contact for one or more customers, helping to plan, debug, and oversee ongoing operations of business-critical applications. You will get your hands dirty troubleshooting application, network, database, and architectural challenges using a suite of internal AWS Cloud tools as well as your existing knowledge and toolkits. We are seeking individuals with strong backgrounds in IT consulting and in any of these related areas: solution design, application and system development, database management, big data and analytics, DevOps consulting, and media technologies. Knowledge of programming and scripting is beneficial to the role.

Key job responsibilities: Every day will bring new and exciting challenges on the job while you: Learn and use groundbreaking cloud technologies. Interact with leading technologists around the world. Work on critical, highly complex customer problems that may span multiple AWS Cloud services. Apply advanced troubleshooting techniques to provide unique solutions to our customers' individual needs. Work directly with AWS Cloud subject matter experts to help reproduce and resolve customer issues. Write tutorials, how-to videos, and other technical articles for the customer community. Leverage your extensive customer support experience and provide feedback to internal AISPL teams on how to improve our services. Drive projects that improve support-related processes and our customers’ technical support experience. Assist in design/architecture of AWS and hybrid cloud solutions. Help enterprises define IT and business processes that work well with cloud deployments. Be available outside of business hours to help coordinate the handling of urgent issues as needed.

About the Team: Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture: Here at AWS, it’s in our nature to learn and be curious.
Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Basic Qualifications: Bachelor’s Degree in Computer Science, IT, Math, or related discipline required, or equivalent work experience. 10+ years of hands-on Infrastructure / Troubleshooting / Systems Administration / Networking / DevOps / Applications Development experience in a distributed systems environment. External enterprise customer-facing experience as a technical lead, with strong oral and written communication skills, presenting to both large and small audiences. Ability to manage multiple tasks and projects in a fast-moving environment. Be mobile and travel to client locations as needed.

Preferred Qualifications: Advanced experience in one or more of the following areas: Software Design or Development, Content Distribution/CDN, Scripting/Automation, Database Architecture, Cloud Architecture, Cloud Migrations, IP Networking, IT Security, Big Data/Hadoop/Spark, Operations Management, Service Oriented Architecture etc. Experience in a 24x7 operational services or support environment. Experience with AWS Cloud services and/or other Cloud offerings.

BASIC QUALIFICATIONS: 3+ years of technical engineering experience. Experience with operational parameters and troubleshooting for three (3) of the following: compute/storage/networking/CDN/databases/DevOps/big data and analytics/security/applications development in a distributed systems environment. Bachelor's degree.

PREFERRED QUALIFICATIONS: Experience with AWS services or other cloud offerings. Experience in internal enterprise or external customer-facing environment as a technical lead.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - AWS India - Maharashtra. Job ID: A2817050
Posted 1 week ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Bengaluru
Work from Office
We are looking for skilled Hadoop and Google Cloud Platform (GCP) Engineers to join our dynamic team. If you have hands-on experience with Big Data technologies and cloud ecosystems, we want to hear from you! Key Skills: Hadoop Ecosystem (HDFS, MapReduce, YARN, Hive, Spark) Google Cloud Platform (BigQuery, DataProc, Cloud Composer) Data Ingestion & ETL pipelines Strong programming skills (Java, Python, Scala) Experience with real-time data processing (Kafka, Spark Streaming) Why Join Us? Work on cutting-edge Big Data projects Collaborate with a passionate and innovative team Opportunities for growth and learning Interested candidates, please share your updated resume or connect with us directly!
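On the GCP side, here is a hedged sketch of a BigQuery aggregation using the google-cloud-bigquery client; the project, dataset, and table names are placeholders invented for the example.

```python
from google.cloud import bigquery

# Hypothetical analytics query: count sessions per device type for a day.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT device_type, COUNT(*) AS sessions
    FROM `my-analytics-project.web.events`
    WHERE event_date = '2024-01-01'
    GROUP BY device_type
"""

for row in client.query(query).result():   # blocks until the job completes
    print(row.device_type, row.sessions)
```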
Posted 1 week ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: Technical Operations Lead Location: Indore, MP (work-from-office only) Experience: 7–12 Years Shift: 24x7 Rotational Shifts Reports To: Technical Operations Manager Qualification: B.Tech/B.E. in Computer Science, Information Technology, or a related field Job Summary: We are seeking a highly skilled and experienced Technical Operations Lead to oversee and guide a team of technical professionals in a dynamic, 24x7 production environment. The ideal candidate will have a strong background in Linux system administration, incident management, production deployments, and cross-functional coordination. This role demands a proactive leader who can manage escalations, ensure system uptime, and drive operational excellence. Key Responsibilities: Lead and mentor a team of 10+ technical operations engineers. Guide the team in handling critical situations and high-severity incidents. Manage and resolve L2/L3 technical issues, ensuring minimal downtime and quick resolution. Handle customer escalations and coordinate with internal stakeholders for timely resolution. Escalate critical issues through the appropriate hierarchical channels. Collaborate with cross-functional teams including development, QA, and infrastructure. Monitor systems using industry-standard tools and respond to alerts proactively. Guide the team in performing production deployments, patching, and release management. Maintain and troubleshoot On-premise server infrastructure. Ensure adherence to operational processes and documentation standards. Participate in 24x7 shift rotations and provide on-call support as needed. Required Skills & Qualifications: 8–12 years of experience in technical operations or system administration. Proven experience in leading and managing technical teams. Strong Linux administration skills (command-line, scripting, troubleshooting). Solid understanding of networking fundamentals (beyond basic level). Hands-on experience with monitoring tools (e.g., Nagios, Zabbix, Prometheus, etc.). Experience in incident management and root cause analysis. Familiarity with ITIL processes and escalation management. Experience in production deployment and release management. Working knowledge of Hadoop and distributed systems. Basic knowledge of Docker and Ansible . Excellent communication and coordination skills. Ability to work under pressure in a fast-paced environment. Preferred Qualifications: Certifications in Linux (RHCE, LFCS), Networking (CCNA), or ITIL. Experience with cloud platforms (AWS, Azure, GCP) is a plus. Automation/scripting experience (Shell, Python, Ansible) is an advantage. Show more Show less
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story: HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team: Multidisciplinary industry leaders, healthcare domain experts, AI/ML and data science experts, and professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer I/II. Job Location: Bangalore, Karnataka, India.

Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities: Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git.
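A hedged sketch of the ETL orchestration described above, as an Apache Airflow 2.x DAG with stubbed extract/transform/load callables; the DAG id and function bodies are invented for illustration, not HiLabs code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Stub task callables; a real pipeline would pull from sources,
# validate, and write to a warehouse.
def extract():
    print("pull source data")

def transform():
    print("clean and validate")

def load():
    print("write to warehouse")

with DAG(
    dag_id="claims_etl",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```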
No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.

HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment. You will also benefit from smart mentorship and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice: https://www.hilabs.com/privacy
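As referenced in the responsibilities above, a minimal sketch of the kind of data validation check such pipelines run. Everything here is hypothetical: the DataFrame, the column names (member_id, claim_amount), and the rules are invented for illustration, and the example assumes pandas is installed; a production pipeline would apply such checks to real extracts and alert on failures.

import pandas as pd

# Hypothetical extract; column names are illustrative only
df = pd.DataFrame({
    "member_id": ["M001", "M002", None, "M001"],
    "claim_amount": [120.50, -3.00, 88.00, 47.25],
})

# Simple validation rules: completeness, range, and uniqueness
checks = {
    "missing member_id": int(df["member_id"].isna().sum()),
    "negative claim_amount": int((df["claim_amount"] < 0).sum()),
    "duplicate member_id": int(df["member_id"].dropna().duplicated().sum()),
}

for rule, failures in checks.items():
    # Non-zero counts would be flagged for review or block the load
    print(f"{rule}: {failures} row(s)")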
Posted 1 week ago
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group

Job Description:
Working within the digital delivery data group, you will apply your domain knowledge and familiarity with domain data processes to support the organisation. The data team provides daily operational data management, data engineering, and analytics support to this organisation across a broad range of activity.

Let me tell you about the role
A data analyst collects, processes, and performs analyses on a variety of datasets. Their key responsibilities include interpreting sophisticated data sets to identify trends and patterns, using analytical tools and methods to generate actionable insights, and crafting visualizations and reports to communicate those insights and recommendations to support decision-making. Data analysts collaborate closely with business domain collaborators to understand their data analysis needs, ensure data accuracy, recommend data-driven solutions, and tackle value-impacting business problems.

You might be a good fit for this role if you:
have strong domain knowledge in control of work data relevant to permits to work, isolation management, and relevant self-verification data.
have experience in an asset-based industry or adjacent sectors (desirable).
have strong analytical skills and demonstrable capability in applying analytical techniques and Python scripting to solve practical problems.
are curious and keen to apply new technologies, trends, and methods to improve existing standards and the capabilities of the Subsurface community.
are well organized and self-motivated; you balance proactive and reactive approaches across multiple priorities to complete tasks on time.
apply judgment and common sense; you use insight and good judgment to inform actions and respond to situations as they arise.

What you will deliver
Be a link between asset teams and Technology, combining an in-depth understanding of one or more relevant domains with data and analytics skills.
Provide actionable, data-driven insights by combining deep statistical skills, data manipulation capabilities, and business insight.
Proactively identify impactful opportunities and autonomously complete data analysis. Apply existing data and analytics strategies relevant to your immediate scope.
Clean, pre-process, and analyse both structured and unstructured data.
Develop data visualisations to analyse and interrogate broad datasets (e.g. with tools such as Microsoft Power BI, Spotfire, or similar).
Present results to peers and senior management, influencing decision-making.

What you will need to be successful (experience and qualifications)
Essential
MSc or equivalent experience in a quantitative field, preferably statistics.
Strong domain knowledge in control of work data relevant to permits to work, isolation management, and relevant self-verification data.
Hands-on experience carrying out data analytics, data mining, and product analytics in sophisticated, fast-paced environments.
Applied knowledge of data analytics and data pipelining tools and approaches across all data lifecycle stages.
A deep understanding of a few commonly used statistical approaches and a high-level understanding of several others.
Advanced SQL knowledge (see the sketch after this posting).
Advanced scripting experience in R or Python.
Ability to write and maintain moderately complex data pipelines.
Customer-centric and pragmatic approach.
Focus on value delivery and swift execution, while maintaining attention to detail.
Good communication and social skills, with the ability to effectively communicate ideas, expectations, and feedback to team members, collaborators, and customers. Cultivate teamwork and partnership.

Desired
Advanced analytics degree.
Experience in an asset-based industry or adjacent sectors.
Experience with data technologies (e.g. Hadoop, Hive, and Spark) is a plus.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Travel Requirement
Up to 10% travel should be expected with this role.

Relocation Assistance:
This role is eligible for relocation within country.

Remote Type:
This position is a hybrid of office/remote working.
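As referenced above, a minimal sketch of the kind of SQL this role calls for, using a window function to trend permit activity. Everything here is hypothetical: the permits table, its columns, and the data are invented for illustration, and the example assumes Python's bundled SQLite is version 3.25 or newer (required for window functions).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE permits (day TEXT, site TEXT, issued INTEGER)")
conn.executemany(
    "INSERT INTO permits VALUES (?, ?, ?)",
    [("2024-01-01", "A", 12), ("2024-01-02", "A", 9), ("2024-01-03", "A", 15),
     ("2024-01-01", "B", 7), ("2024-01-02", "B", 11), ("2024-01-03", "B", 6)],
)

# 3-day moving average of permits issued per site: a typical trend query
rows = conn.execute("""
    SELECT day, site, issued,
           AVG(issued) OVER (
               PARTITION BY site ORDER BY day
               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
           ) AS moving_avg
    FROM permits
    ORDER BY site, day
""").fetchall()

for row in rows:
    print(row)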
Posted 1 week ago
5.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters.
Review the performance dashboard and the scores for the team.
Support the team in improving performance parameters by providing technical support and process guidance.
Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
Ensure standard processes and procedures are followed to resolve all client queries.
Resolve client queries as per the SLAs defined in the contract.
Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
Document and analyze call logs to spot the most frequently occurring trends and prevent future problems.
Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution.
Ensure all product information and disclosures are given to clients before and after the call/email requests.
Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries.
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
If unable to resolve the issues, escalate them to TA & SES in a timely manner.
Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
Offer alternative solutions to clients (where appropriate) with the objective of retaining the customer's and client's business.
Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client.
Mentor and guide Production Specialists on improving technical knowledge.
Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
Develop and conduct trainings (triages) within products for Production Specialists as per target.
Inform the client about the triages being conducted.
Undertake product trainings to stay current with product features, changes, and updates.
Enroll in product-specific and any other trainings per client requirements/recommendations.
Identify and document the most common problems and recommend appropriate resolutions to the team.
Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, Technical Test performance

Mandatory Skills: Databricks - Data Engineering.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Dear Associates,

Greetings from Tata Consultancy Services!! Thank you for expressing your interest in exploring a career possibility with the TCS family.

Hiring For: Python AI/ML, MLOps
Must Have: Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, Python, SQL, Snowflake
Experience: 5+ yrs
Location: Mumbai / Pune

If interested, kindly fill in the details below and send your resume to nitu.sadhukhan@tcs.com. Note: only eligible candidates with relevant experience will be contacted further.

Name:
Contact No:
Email id:
Current Location:
Preferred Location:
Highest Qualification (part-time / correspondence is not eligible):
Year of Passing (Highest Qualification):
Total Experience:
Relevant Experience:
Current Organization:
Notice Period:
Current CTC:
Expected CTC:
PAN Number:
Gap in years, if any (Education / Career):
Updated CV attached (Yes / No)?
If attended any interview with TCS in the last 6 months:
Available for walk-in drive on 14th June, Pune:

Thanks & Regards,
Nitu Sadhukhan
Talent Acquisition Group
Tata Consultancy Services
Let's Connect: linkedin.com/in/nitu-sadhukhan-16a580179
Nitu.sadhukhan@tcs.com
Posted 1 week ago
The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.
Major tech hubs such as Bengaluru, Hyderabad, Pune, and Chennai are known for their thriving IT industries and have a high demand for Hadoop professionals.
The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.
In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.
In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
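To make the skill list above concrete, here is a minimal sketch of the kind of aggregation that Hadoop-ecosystem interviews often probe. The data and column names are invented for illustration, and the example assumes pyspark is installed; in practice the DataFrame would be read from HDFS, Hive, or S3 rather than created inline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skills-demo").getOrCreate()

# Hypothetical clickstream events; real jobs would read from HDFS or a Hive table
df = spark.createDataFrame(
    [("u1", "search"), ("u1", "click"), ("u2", "search"), ("u2", "search")],
    ["user_id", "event"],
)

# Count events per type and rank them: a staple groupBy/agg pattern
df.groupBy("event").agg(F.count("*").alias("n")).orderBy(F.desc("n")).show()

spark.stop()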
As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!