
4958 Hadoop Jobs - Page 48

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage (see the sketch after this listing).
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
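To picture the first responsibility above, here is a minimal, illustrative Apache Beam (Dataflow) pipeline of the Cloud Storage-to-BigQuery shape the listing describes. This is a sketch, not the employer's actual code: the project, bucket, table, and schema names are invented placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_event(line):
    """Split a CSV line into a BigQuery-ready dict (placeholder schema)."""
    user_id, event_type, ts = line.split(",")
    return {"user_id": user_id, "event_type": event_type, "event_ts": ts}

def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # use "DirectRunner" for local tests
        project="my-project",             # placeholder project id
        temp_location="gs://my-bucket/tmp",
        region="asia-south1",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText(
                "gs://my-bucket/events/*.csv", skip_header_lines=1)
            | "Parse" >> beam.Map(parse_event)
            | "DropBadRows" >> beam.Filter(lambda row: row["user_id"])
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,event_type:STRING,event_ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )

if __name__ == "__main__":
    run()
```

The same skeleton extends to the streaming side of the stack by swapping ReadFromText for beam.io.ReadFromPubSub and running the pipeline in streaming mode.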

Posted 1 week ago

Apply


3.0 - 5.0 years

5 - 12 Lacs

Coimbatore

On-site

Job Type: Full-time, Permanent
Job Mode: On-Site/Hybrid
Joining: Open to immediate joiners and candidates available within 1-2 weeks.

Sense7AI Data Solutions is seeking a highly skilled and forward-thinking AI/ML Engineer to join our dynamic team. You will play a critical role in designing, developing, and deploying state-of-the-art AI solutions using both classical machine learning and cutting-edge generative AI technologies. The ideal candidate is not only technically proficient but also deeply familiar with modern AI tools, frameworks, and prompt engineering strategies.

Key Responsibilities
- Design, build, and deploy end-to-end AI/ML solutions tailored to real-world business challenges.
- Leverage the latest advancements in generative AI, LLMs (e.g., GPT, Claude, LLaMA), and multimodal models for intelligent applications.
- Develop, fine-tune, and evaluate custom language models using transfer learning and prompt engineering.
- Work with traditional ML models and deep learning architectures (CNNs, RNNs, Transformers) for diverse applications such as NLP, computer vision, and time-series forecasting.
- Create and maintain scalable ML pipelines using MLOps best practices.
- Collaborate with cross-functional teams (data engineers, product managers, business analysts) to understand domain needs and translate them into AI solutions.
- Stay current on the evolving AI landscape, including open-source tools, academic research, cloud-native AI services, and responsible AI practices.
- Ensure AI model transparency, fairness, bias mitigation, and compliance with data governance standards.

Required Skills & Qualifications
- Education: any degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Experience: 3-5 years of hands-on experience in AI/ML solution development, model deployment, and experimentation.
- Programming: Python (strong); familiarity with Bash/CLI and Git.
- Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, Scikit-learn.
- GenAI & LLM tools: LangChain, OpenAI APIs, Anthropic, Vertex AI, PromptLayer, Weights & Biases.
- Prompt engineering: experience crafting, testing, and optimizing prompts for LLMs across multiple platforms.
- Cloud & MLOps: AWS/GCP/Azure (SageMaker, Vertex AI, Azure ML), Docker, Kubernetes, MLflow.
- Data: SQL, NoSQL, BigQuery, Spark, Hadoop; data wrangling, cleansing, and feature engineering.
- Strong grasp of model evaluation techniques, fine-tuning strategies, and A/B testing (see the sketch after this listing).

Preferred Qualifications
- Experience with AutoML, reinforcement learning, vector databases (e.g., Milvus, FAISS), or RAG (Retrieval-Augmented Generation).
- Familiarity with deploying LLMs and GenAI systems in production environments.
- Hands-on experience with open-source LLMs and fine-tuning (e.g., LLaMA, Mistral, Falcon, OpenLLaMA).
- Understanding of AI compliance, data privacy, ethical AI, and explainability (XAI).
- Strong problem-solving skills and the ability to work in fast-paced, evolving tech landscapes.
- Excellent written and verbal communication in English.

Pay: ₹500,000.00 - ₹1,200,000.00 per year
Benefits: flexible schedule, health insurance, paid time off, Provident Fund
Experience: machine learning: 3 years (preferred); AI: 3 years (preferred)
Work Location: In person
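As a hedged illustration of the "model evaluation" requirement, here is a tiny, self-contained scikit-learn workflow: train on one split, score on a held-out split. Everything here (data, model, metric) is synthetic and chosen for brevity, not taken from the posting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real business problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Score on held-out data; AUC is one of several metrics an offline
# comparison of two candidate models (the precursor to an A/B test) might use.
probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```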

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources (see the sketch after this listing).
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure
Preferred Skill Sets: Spark, PySpark, Azure
Years of Experience Required: 4-8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Required Skills: PySpark
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
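A minimal sketch of the kind of PySpark ETL the responsibilities describe, assuming a Databricks-style Spark session with access to ADLS Gen2. The storage account, container, and column names are placeholders invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Placeholder ADLS Gen2 paths; real names depend on the client environment.
raw_path = "abfss://raw@myaccount.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@myaccount.dfs.core.windows.net/orders_daily/"

orders = (
    spark.read.option("header", "true").csv(raw_path)
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])          # idempotent re-runs
)

daily = orders.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

# Partitioned Parquet keeps downstream Synapse/Databricks queries cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)
```

On Databricks, where the Delta format is available, writing Delta instead of Parquet is a small change (.format("delta").save(...)).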

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Title: Data Engineer

About the Role
As a Junior/Senior Data Engineer, you'll take the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity (see the sketch after this listing).
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.

Skills
- Cloud: Azure/GCP/AWS
- DE technologies: ADF, BigQuery, AWS Glue, etc.
- Data lake: Snowflake, Databricks, etc.

Mandatory Skill Sets: cloud (Azure/GCP/AWS); DE technologies (ADF, BigQuery, AWS Glue, etc.); data lake (Snowflake, Databricks, etc.)
Preferred Skill Sets: cloud (Azure/GCP/AWS); DE technologies (ADF, BigQuery, AWS Glue, etc.); data lake (Snowflake, Databricks, etc.)
Years of Experience Required: 2-4 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation, Data Warehouse, Data Warehouse Indexing {+ 13 more}
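In its simplest form, the "advanced data quality checks" item above could look like assertion-style checks run before a table is published. A hedged sketch, with invented paths and an arbitrary null-rate tolerance:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/lake/curated/customers")   # placeholder path

# Check 1: the table must not be empty.
total = df.count()
if total == 0:
    raise ValueError("DQ failure: table is empty")

# Check 2: per-column null rate must stay under a tolerance (5% here,
# purely illustrative; real thresholds come from the data contract).
null_rates = df.select(
    *[(F.sum(F.col(c).isNull().cast("int")) / total).alias(c) for c in df.columns]
).first().asDict()

bad = {col: rate for col, rate in null_rates.items() if rate > 0.05}
if bad:
    raise ValueError(f"DQ failure: null rate above threshold in {bad}")
```

Wired into an orchestrator, a raised exception fails the pipeline run and triggers the alerting the listing mentions.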

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Summary
The Sr Data Analyst - BI & Reporting will play a critical role in developing end-to-end reporting solutions, from data collection and transformation to report generation and visualization. This role involves working on the cutting edge of data engineering and analytics, leveraging machine learning, predictive modeling, and generative AI to drive business insights. The analyst will enable informed decision-making through real-time, optimized analytics and reporting capabilities, supporting the organization's data-driven initiatives, with a strong focus on financial forecasting and business performance insights.

Join us if you are result-oriented, strategic, collaborative, customer-focused, detail-oriented, persistent, and a self-starter. You should thrive in ambiguous environments, love multitasking, and have a deep passion for BI & analytics. You bring the appropriate Business Intelligence (BI) skill set along with an understanding of business function fundamentals to define, drive, and deliver the KPIs that matter to the business. The role encompasses independently creating self-serve, compelling, and impactful dashboards aligned to the business function organization's priorities.

Roles and Responsibilities
- Design and create visualizations; develop, execute, and maintain regular dashboards/reports using Power BI and Tableau.
- Engage with stakeholders, business function leads, and project teams to understand data requirements and business needs.
- Design analytical frameworks to solve business problems, utilizing predictive models and machine learning techniques.
- Explore, clean, and visualize data sets to prepare for analysis, ensuring data quality and consistency.
- Build advanced data models and pipelines using SQL and other tools.
- Develop and maintain BI semantic data models for large-scale data warehouses/data lakes.
- Develop customized visuals and layouts that prioritize key KPIs, providing clear and concise insights that align with business objectives.
- Implement UX best practices to improve navigation, data storytelling, and the overall usability of dashboards, ensuring that reports are actionable and user-friendly.
- Present data-driven insights and recommendations through effective storytelling using dashboards, reports, and presentations.
- Be the subject matter expert and demonstrate leadership on standardized reporting, experimentation, and analyses to support performance discussions.

Desired Characteristics & Technical Expertise
- 5 to 8 years of relevant experience with a strong grounding in data analytics, reporting, and BI development.
- Very strong proficiency in data visualization tools (Power BI and Tableau) along with advanced Excel skills.
- Hands-on experience with predictive modeling, machine learning algorithms, and financial forecasting techniques (see the sketch after this listing).
- Expertise in SQL for data transformation, with experience in Hadoop or similar big data technologies preferred.
- Experience mapping data flows from source systems to final reporting repositories (data warehouses, data lakes, etc.).
- Knowledge of descriptive, diagnostic, predictive, and prescriptive analytics approaches.
- Skilled in data architecture design, implementation best practices, and continuous optimization.
- Collaborate with stakeholders to gather requirements and iterate on designs, optimizing visual elements based on user feedback to drive better decision-making.
- Develop advanced visualization techniques to highlight trends, patterns, and outliers, making complex data easily understandable for various audiences.
- Ensure that visuals adhere to data governance standards, incorporating appropriate drill-down capabilities and interactivity for in-depth analysis.
- Regularly review and update reports to reflect evolving business needs and continuously enhance the reporting experience.
- Demonstrated ability to organize, prioritize, and deliver tasks with high attention to detail, especially in dynamic environments.
- Self-driven and open to exploring alternate technologies and methods for problem-solving.
- Strong communication skills, capable of translating technical insights into business language.

Business Acumen
- Understands the technology landscape, stays up to date on current technology trends and new technology, and brings new ideas to the team.
- Displays understanding of the project's value proposition for the customer; shows commitment to deliver the best value proposition for the targeted customer.
- Learns the organization's vision statement and decision-making framework; understands how team and personal goals/objectives contribute to that vision.

Leadership Attributes
- Voices opinions and presents clear rationale; uses data or factual evidence to influence.
- Completes assigned tasks on time and with high quality; takes independent responsibility for assigned deliverables.
- Seeks to understand problems thoroughly before implementing solutions; asks questions to clarify requirements when ambiguities are present.
- Identifies opportunities for innovation and offers new ideas; takes the initiative to experiment with new software frameworks.
- Adapts to new environments and changing requirements; pivots quickly as needed; when coached, responds to need and seeks information from other sources.

Additional Information
Relocation Assistance Provided: Yes
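As one deliberately toy reading of the "financial forecasting techniques" requirement, here is a lag-feature regression on a synthetic monthly KPI. A production forecast would add seasonality, exogenous drivers, and backtesting; everything here is invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic monthly revenue series: trend plus noise.
revenue = pd.Series(100 + 2 * np.arange(36) + rng.normal(0, 3, 36))

# Turn the series into a supervised problem: predict y from its two lags.
frame = pd.DataFrame({
    "lag1": revenue.shift(1),
    "lag2": revenue.shift(2),
    "y": revenue,
}).dropna()

model = LinearRegression().fit(frame[["lag1", "lag2"]], frame["y"])

# One-step-ahead forecast from the two most recent observations.
latest = pd.DataFrame(
    [[revenue.iloc[-1], revenue.iloc[-2]]], columns=["lag1", "lag2"])
print(f"Forecast for next month: {model.predict(latest)[0]:.1f}")
```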

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure
Preferred Skill Sets: Spark, PySpark, Azure
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Azure Data Lake
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure
Preferred Skill Sets: Spark, PySpark, Azure
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Required Skills: Azure Data Factory, Databricks Platform, Data Lake, Microsoft Azure, PySpark
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Budgetary Management, Business Decisions, Communication, Complex Data Analysis, Cost Reduction, Creating Budgets, Creativity, Data-Driven Insights, Data Visualization, Data Visualization Software, Due Diligence Research, Embracing Change, Emotional Regulation, Empathy, Financial Data Mining, Financial Forecasting, Financial Management, Financial Management Software, Financial Modeling, Financial Planning, Financial Reporting {+ 16 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Role: Manager, Advanced Analytics & ML - Financial Services

This is an opening at the Manager level in the Data and Analytics division at PwC India, centred on the financial services domain. A successful candidate is expected to work proactively and effectively on multiple client engagements and take ownership of the entire project delivery. This includes project management skills, technical and/or functional expertise, and a commitment to comply with PwC's delivery quality expectations. The role requires strong interpersonal skills: a candidate who enjoys the challenge of working with other teams internally and with a variety of clients externally, and who has the personal and professional presence and self-confidence to work effectively with senior teams as well as all other levels. The candidate will need excellent communication skills and a consistent record of delivering impactful and insightful projects. They will also participate in client meetings, understand business needs, and design end-to-end machine learning and analytics solutions to fulfill those needs, and will be expected to contribute to practice or firm development, for example by mentoring other team members, leading training and development initiatives, contributing to thought leadership papers, and developing reusable assets.

Roles and Responsibilities
- Develop, review, and implement solutions applying advanced analytics techniques including, but not restricted to, machine learning, deep learning, AI, NLP, and visualization.
- Troubleshoot, isolate, and remediate model errors.
- Work on and manage large to mid-size projects, and ensure smooth service delivery on assigned products, engagements, and/or geographies.
- Work with project leaders to analyse resource needs and gaps and devise alternative ways forward.
- Provide expert reviews for all projects within the assigned subject.
- Lead business development initiatives, including responding to RFPs and preparing and delivering client presentations with the objective of sales and business development.
- Manage cross-functional teams and mentor junior team members.
- Apply an understanding of statistical methods to enable appropriate interpretation of results.
- Analyse programs through the lens of client requirements and design optimal solutions to fulfill those requirements.
- Think conceptually and find innovative ways to solve analytical problems.
- Bring experience from multiple analytics consulting and implementation projects.

Skills & Qualifications Required
- Minimum 8-10 years of experience in the advanced analytics and machine learning space, with at least 4-6 years of experience in financial services.
- Advanced understanding and hands-on experience in SQL and at least one of R or Python; basic exposure to other statistical packages such as SAS.
- Deep understanding of predictive algorithms such as logistic regression, linear regression, decision trees, random forest, XGBoost, SVM, etc., and clustering algorithms such as k-means, k-nearest neighbour, hierarchical, etc. (see the sketch after this listing).
- Hands-on experience in NLP and a deep understanding of text analytics algorithms and modelling workflow.
- Good knowledge of neural networks and deep learning: RNN, CNN, LSTM, etc.
- Working knowledge of big data environment setups such as Spark, PySpark, Hive, Hadoop, etc.
- Advanced understanding of cloud (AWS, Azure, etc.) and good exposure to the Azure/AWS machine learning workbench and MLOps workflow.
- Excellent verbal and written communication skills; experienced in creating PowerPoint presentations and dashboards, solving complicated client problems, and communicating precise insights.
- Strong organizational skills and the ability to prioritize and work on projects with great efficiency and attention to detail.
- Minimum qualification: B.E.; MBA is preferred (with at least 5 years of experience post-MBA).

Mandatory Skill Sets: Advanced Analytics & ML
Preferred Skill Sets: Advanced Analytics & ML
Years of Experience Required: 9+
Education Qualification: BTech/MBA/MCA
Degrees/Field of Study Required: Bachelor of Engineering, Master of Business Administration
Required Skills: Advanced Analytics
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
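To make the clustering requirement concrete, here is a toy k-means segmentation over invented customer features. Scaling comes first because k-means is distance-based; the features, k, and seed are all arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Fake features per customer: [avg_balance, txn_count, credit_utilisation].
X = rng.normal(size=(500, 3)) * [5000, 20, 0.2] + [20000, 40, 0.4]

# Standardise so no single feature dominates the Euclidean distance.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=7).fit(X_scaled)

print("cluster sizes:", np.bincount(kmeans.labels_))
```

In practice, k would be chosen via silhouette scores or the elbow method, and the resulting segments profiled against business variables.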

Posted 1 week ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About Us
Astra Capital is a pioneering quantitative trading firm that transforms data and sophisticated algorithms into market-beating strategies. Our team embraces the rapid pace of today's financial landscape, constantly refining our systems for maximum speed, reliability, and edge. We're looking for a talented Software Developer proficient in C++/Rust to architect and optimize the ultra-low-latency infrastructure that powers our trading operations.

Position Overview
As a Senior Software Developer, you will play a pivotal role in designing, developing, and optimizing trading systems and infrastructure. You will work directly with quantitative researchers, traders, and other engineers to build and maintain technology solutions that drive our trading strategies. This is a unique opportunity to work at the intersection of technology and finance.

Key Responsibilities
- System design & implementation: create and enhance high-frequency trading engines in C++/Rust that span multiple venues.
- Performance tuning: conduct algorithmic research, profiling, and benchmarking to drive continuous latency improvements.
- Tooling & observability: build dashboards, alerting, and administrative interfaces for real-time system visibility.
- Full-lifecycle ownership: lead requirements gathering, coding, testing, deployment, and maintenance of core modules.
- Cross-functional collaboration: work closely with quants, traders, and fellow engineers to align technology with strategy.
- Innovation & growth: stay current on language features, libraries, and best practices in systems programming and fintech.

Qualifications

Must have:
- Academic background: BS/MS in Computer Science, Engineering, or a related discipline.
- Professional experience: 3-5 years in C++/Rust development and object-oriented design.
- Systems knowledge: data structures and algorithms, distributed/low-latency systems, multi-threading, parallel computing.
- Coding excellence: clean, efficient, maintainable code with comprehensive documentation.
- Operational skills: Linux proficiency, networking fundamentals, production debugging.
- Soft skills: analytical mindset, meticulous attention to detail, strong teamwork and communication.

Good to have:
- Financial technology or HFT industry background.
- Insight into market microstructure, trading protocols, and TCP/IP (and other network technologies).
- Python for scripting; experience with other languages (Go, Java, R, Bash).
- Experience with big data technologies such as Hadoop/Spark.

Why Join Astra Capital?
- Exceptional opportunity to work with a dedicated, talented, and driven team.
- Opportunity to be a foundational member of a growing quantitative fund.
- Attractive base salary and benefits package, including performance-based bonuses.
- Outstanding company culture: relaxed, highly professional, innovative, and entrepreneurial.
- Leverage the growth of the Indian market to generate exponential value for yourself.

Astra Capital is where serious tech meets epic fun. Help us build lightning-fast trading systems, enjoy competitive pay and killer bonuses, and high-five your team as we conquer India's hot markets together. Ready to make waves?

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education
Master's degree

Required Technical and Professional Expertise
- Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred Technical and Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Responsibilities
- Design, develop, and maintain efficient and scalable solutions using PySpark.
- Ensure data quality and integrity by implementing robust testing, validation, and cleansing processes.
- Integrate data from various sources, including databases, APIs, and external datasets.
- Optimize and tune PySpark jobs for performance and reliability (see the sketch after this listing).
- Document data engineering processes, workflows, and best practices.

Requirements
- Strong understanding of databases, data modelling, and ETL tools and processes.
- Strong programming skills in Python and proficiency with PySpark and SQL.
- Experience with relational databases, Hadoop, Spark, Hive, and AWS.
- Excellent communication and collaboration skills.
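Two of the most common moves behind "optimize and tune PySpark jobs" are broadcasting a small lookup table to avoid shuffling the large side of a join, and caching a frame only when it feeds multiple actions. A sketch with placeholder paths and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuned-job").getOrCreate()

facts = spark.read.parquet("/data/facts")   # large fact table (placeholder)
dims = spark.read.parquet("/data/dims")     # small lookup table (placeholder)

# Broadcast join: ships the small table to every executor instead of
# shuffling the large one across the cluster.
joined = facts.join(F.broadcast(dims), "dim_id")

# Cache because the joined frame is reused by two actions below.
joined.cache()
print("rows:", joined.count())

daily = joined.groupBy("event_date").agg(F.sum("value").alias("total"))
daily.write.mode("overwrite").parquet("/data/daily_totals")
```

Beyond this, typical levers include tuning shuffle partitions, fixing data skew, and pruning columns early; which one matters depends on what the Spark UI shows for the job in question.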

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Wholesale Risk Analytics (WRA) provides decision support services to enable our banking and markets, wealth management, capital management and credit risk business partners to manage risk well. To accomplish this, we build, develop and maintain models for risk and loss measurement. Additionally, we produce and administer forecasts supporting allowance and stress-testing both domestically (US) and internationally. Our teams complement each other and benefit from broad and diverse employment and experience within industries including government, academia, finance, law, consulting and the military. Through developing our people, embracing operational excellence to drive process improvements, and investing in strategic data initiatives, we provide enhanced risk management capabilities, insights and analysis to our business partners.

Job Description
Wholesale Risk Analytics (WRA) is seeking an eager candidate ready to delve into the world of transformative model implementation and technology infrastructure. The hired candidate will have the opportunity to explore software development and data infrastructure on the strategic GRA Core Platform (GCP), to collaborate with GRA and Technology teams, to leverage software engineering and model development skills, and to gain expertise in how to meet business and regulatory requirements in a fast-paced environment. This role requires strong technical acumen, encompassing data analysis, computer programming, organization, good verbal and written communication, and software design capabilities. You will be at the forefront of Python and PySpark software development, and will interface with Hadoop, Hive and Exadata for model data input and output integration (see the sketch after this listing).

Responsibilities
- Adhere to software standards and controls established by the model development team for model implementation.
- Develop model execution code and operational functionalities that support models in production, including GRA Core Platform (GCP) workflows.
- Implement new and existing models, analytic processes, and system approaches in Python.
- Help construct a strategic model infrastructure using Python and PySpark, prepare code documentation, and collaborate with technology teams to deploy code in production.
- Apply strong quantitative and computer programming skills (specifically in Python), and understand and contribute to software design and GRA code control standards.

Requirements

Education: BE / MCA
Certifications, if any: NA
Experience range: 7 to 9 years

Foundational skills:
- 3 years of hands-on Python and PySpark programming experience.
- 2 years of programming experience with databases such as Oracle, Hadoop, and/or Hive.
- 2 years of JIRA and Git experience for software development.
- Proficiency in writing functional specifications for software implementation.
- Exceptional analytical and problem-solving skills.
- Basic software engineering and design skills with a commitment to best software development practices.
- A strategic thinker with the aptitude to leverage technical skills to solve business problems.
- Experience in designing, developing, and implementing scalable infrastructure solutions.

Desired skills:
- Capable of documenting processes, inputs, outputs, and requirements, along with identifying gaps and improving workflow.

Work timings: 11:30 AM - 8:30 PM
Job location: Hyderabad

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role
We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will lead our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Responsibilities
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.

You will need
Mandatory skills: Hadoop, Hive, Spark, any stream processing, Scala/Java, Kafka, and containerization/Kubernetes.
Good to have: functional programming, Kafka Connect, Spark Streaming, Helm charts, and hands-on experience with Kubernetes.

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
* Your fixed pay is the guaranteed pay as per your contract of employment.
* Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
* In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
* Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
* We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
* Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
* Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
* Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco Bengaluru: We are a multi-disciplinary team creating a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility, providing cutting-edge technological solutions and empowering our colleagues to do ever more for our customers. With cross-functional expertise in Global Business Services and Retail Technology & Engineering, a wide network of teams and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 4,40,000 colleagues.

At Tesco Business Solutions, our mission is to simplify, scale and partner to serve our customers, colleagues and suppliers through a best-in-class intelligent business services model. We do this by executing the services model framework right at the heart of everything we do for our worldwide customers. The key objective is to implement and execute the service model across all our functions and markets consistently. The ethos of business services is to free up our colleagues from regular manual operational work. We use cognitive technology to augment our key decision making. We have also built a Continuous Improvement (CI) culture across functions to drive bottom-up business efficiencies by optimising processes. Business services colleagues act as business partners with our group stakeholders, building collaborative partnerships that drive continuous improvement across markets and functions to deliver the best customer experience by serving our shoppers a little better every day.

At Tesco, inclusion means that Everyone's Welcome. Everyone is treated fairly and with respect; by valuing individuality and uniqueness we create a sense of belonging. Diversity and inclusion have always been at the heart of Tesco. It is embedded in our values: we treat people how they want to be treated. We always want our colleagues to feel they can be themselves at work and we are committed to helping them be at their best. Across the Tesco group we are building an inclusive workplace, a place to actively celebrate the cultures, personalities and preferences of our colleagues, who in turn help to build the success of our business and reflect the diversity of the communities we serve.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role
Responsible for providing support via automation while devising efficient reporting solutions in alignment with customer and business needs.

You will be responsible for
- Understanding business needs and developing an in-depth understanding of Tesco processes
- Delivering high-quality and timely completion of specified reporting and dash-boarding work
- Understanding the end-to-end process of generating reports
- Understanding the underlying data sources
- Actioning any change request received from partners
- Developing user manuals for reporting procedures and related process changes
- Handling new report development requests
- Leading the transformation of reports into new-age tools and technologies
- Providing solutions to issues related to report development and delivery
- Maintaining the log of issues, risks and mitigation plans
- Identifying operational improvements and applying solutions and automation using Python and Alteryx
- Enhancing and developing daily, weekly and periodic reports and dashboards using advanced Excel, advanced SQL, Hadoop and Teradata
- Partnering with stakeholders to identify problems, collaborating with them to brainstorm the best possible reporting solution, and delivering solutions in the form of intelligent business reports and dashboards (Tableau, Power BI)
- Following our Business Code of Conduct and always acting with integrity and due diligence

You will need
- 2-4 years' experience in analytics delivery in any one of the domains like retail, CPG, telecom or hospitality, preferably for one of the following functional areas: marketing, supply chain, customer, merchandising, operations, finance or digital
- Advanced Excel
- Strong verbal and written communication
- Advanced SQL, big data infrastructure, Hadoop, Hive, Python, Spark
- Automation platforms: Alteryx/Python
- Advanced developer knowledge of Tableau and Power BI
- Logical reasoning
- Eye for detail

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services operation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Are you a full-stack developer with a passion for driving business growth through innovative digital solutions who wants to make a significant impact? Our Power & Sensor Systems (PSS) Division is seeking you for the Indian branch of our divisional Digitalization department, responsible for designing and implementing future-state business processes enhanced by digital, AI-powered solutions that revolutionize the way we work. As one of the first members of this new team, you will have the unique opportunity to support recruitment and to onboard and mentor young talent. Furthermore, you will drive the development of innovative data products in close collaboration with our headquarters, data scientists and the corresponding business analysts who oversee our solution portfolio and manage customer interactions and requirements gathering. You will leverage the latest technologies and trends to create data pipelines that fuel digital solutions, enhancing customer experience, improving operational efficiency, and driving sustainable, profitable growth. If you're excited about joining this new team and making a significant impact in the digital space, we invite you to apply!

Job Description
In your new role you will:
- Application development: design, develop, and maintain full-stack data applications with modern web technologies, ensuring responsive and seamless user experiences.
- Performance optimization: monitor and optimize application performance, identifying and resolving bottlenecks while implementing best practices in coding and deployment.
- Security and compliance: implement robust security measures, ensure compliance with data protection regulations, and conduct regular security assessments with the respective central functions.
- Technical leadership: drive technical excellence and innovation within your work and be a role model for younger engineers, ensuring best practices in software development and DevOps are followed while staying updated with the latest industry trends.
- Project management: drive development and implementation of digitalization use cases so that they are delivered on time, with high quality and within budget, by effectively managing project timelines and interdependencies with other organizations.
- Stakeholder collaboration: act as the main point of contact for your project counterparts from business and other central departments, collect requirements, and communicate project progress, challenges, and achievements to your management.
- Customer-centric innovation: identify opportunities for technical improvements and new technologies that enhance customer satisfaction, while proactively suggesting and implementing enhancements that directly address customer needs and add value.

With great dedication you commit yourself to your projects and area of responsibility. You are able to inspire others and encourage them towards long-lasting top performance. Together with your co-workers you carry initiatives through to their full implementation and application. You focus your efforts on finding solutions that offer added value to our internal customers at Infineon and go the extra mile to ensure user adoption of our innovative tools to harvest their full value. At the same time, you can quickly establish successful cooperation between various stakeholders of diverse professional and cultural backgrounds at different hierarchical levels.
Your Profile
You are best equipped for this task if you have:
- A degree in Information Technology, Computer Science, Engineering or a related field
- More than 3 years of relevant work experience as a full-stack developer or in a similar role, with a focus on data applications
- Proficiency in front-end technologies such as HTML, CSS, JavaScript, and frameworks like React, Angular or comparable
- Strong knowledge of back-end development using languages such as Python, Node.js, Java, or similar
- Proficiency with relational and distributed data storage (e.g., Oracle, MySQL, HDFS) and suitable query languages (e.g., SQL)
- Good understanding of data processing frameworks and tools such as Apache Spark, Hadoop or similar
- Familiarity with RESTful API design and implementation
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Proven experience in DevOps, including relevant methods and tools (e.g., Git, CI/CD, Jenkins)
- Experience in project management, with the ability to take over responsibility at a work-stream level
- Good interpersonal and communication skills (verbal and written) to build strong relationships with customers and stakeholders at all levels
- An analytical, problem-solving mindset with the ability to navigate and resolve challenges in a large, dynamic and complex environment while being sensitive to soft aspects
- Passion for new technologies and an entrepreneurial spirit in running technical pilots to shape future solutions

Contact: Shavin.Shashidhar@infineon.com

#WeAreIn for driving decarbonization and digitalization. As a global leader in semiconductor solutions in power systems and IoT, Infineon enables game-changing solutions for green and efficient energy, clean and safe mobility, as well as smart and secure IoT. Together, we drive innovation and customer success, while caring for our people and empowering them to reach ambitious goals. Be a part of making life easier, safer and greener. Are you in?

We are on a journey to create the best Infineon for everyone. This means we embrace diversity and inclusion and welcome everyone for who they are. At Infineon, we offer a working environment characterized by trust, openness, respect and tolerance and are committed to giving all applicants and employees equal opportunities. We base our recruiting decisions on the applicant's experience and skills. Please let your recruiter know if they need to pay special attention to something in order to enable your participation in the interview process. Click here for more information about Diversity & Inclusion at Infineon.

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Wholesale Risk Analytics (WRA) provides decision support services to enable our banking and markets, wealth management, capital management and credit risk business partners to manage risk well. To accomplish this, we build, develop and maintain models for risk and loss measurement. Additionally, we produce and administer forecasts supporting allowance and stress-testing both domestically (US) and internationally. Our teams complement each other and benefit from broad and diverse employment and experience within industries including government, academia, finance, law, consulting and the military. Through developing our people, embracing operational excellence to drive process improvements, and investing in strategic data initiatives, we provide enhanced risk management capabilities, insights and analysis to our business partners.

Job Description*
Wholesale Risk Analytics (WRA) is seeking an eager candidate ready to delve into the world of transformative model implementation and technology infrastructure. The hired candidate will have the opportunity to explore software development and data infrastructure on the strategic GRA Core Platform (GCP), to collaborate with GRA and Technology teams, to leverage software engineering and model development skills, and to gain expertise in how to meet business and regulatory requirements in a fast-paced environment. This role requires strong technical acumen, encompassing data analysis, computer programming, organization, good verbal and written communication, and software design capabilities. You will be at the forefront of Python and PySpark software development, and will interface with Hadoop, HIVE and Exadata for model data input and output integration.
Responsibilities*
The role involves adhering to software standards and controls established by the model development team for model implementation, along with developing model execution code and operational functionalities that support models in production, including GRA Core Platform (GCP) workflows. In addition, the role will entail Python implementation of new and existing models, analytic processes, or system approaches. This role will also be instrumental in constructing a strategic model infrastructure using Python and PySpark, preparing code documentation, and collaborating with technology teams to deploy code in production. The role requires strong quantitative and computer programming skills (specifically in Python), and the capability to understand and contribute to software design and GRA code control standards.

Requirements*
Education* BE / MCA
Certifications If Any: NA
Experience Range* 7 to 9 Years

Foundational Skills*
- 3 years of hands-on experience in Python and PySpark programming
- 2 years of programming experience with databases such as Oracle, Hadoop, and/or HIVE
- 2 years of JIRA and Git experience for software development
- Proficiency in writing functional specifications for software implementation
- Exceptional analytical and problem-solving skills
- Basic software engineering and design skills with a commitment to best software development practices
- A strategic thinker with the aptitude to leverage technical skills to solve business problems
- Experience in designing, developing, and implementing scalable infrastructure solutions

Desired Skills*
- Capable of documenting processes, inputs, outputs, and requirements, along with identifying gaps and improving workflows.

Work Timings* 11:30 AM - 8:30 PM
Job Location* Hyderabad

Posted 1 week ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Come work at a place where innovation and teamwork come together to support the most exciting missions in the world! We are seeking a talented Senior Manager, Engineering to help build the Enterprise TruRisk Platform from the ground up, helping customers to measure, communicate and eliminate cyber risks. Working with a team of engineers and architects, you will be responsible for prototyping, designing, developing and supporting a highly scalable, distributed, SaaS-based security risk prioritization product. This is a great opportunity to be an integral part of a team building Qualys' next-generation micro-services based technology platform, processing billions of transactions of data per day with Big Data technologies, leveraging open-source technologies, and working on challenging and business-impacting projects.

Responsibilities:
- Design and develop a security product in the cloud from the ground up.
- Produce high-quality software following good architecture and design principles that you and your team will find easy to work with in the future.
- Work on the framework for the data ingestion, normalization, enrichment and risk evaluation pipeline.
- Work with big data technologies like Kafka, Spark, Hadoop, Elastic and micro-services.
- Manage teams and engineers to meet delivery goals.
- Work with multiple teams to resolve cross-functional dependencies.
- Research and implement code design, and drive the adoption of new technologies and skills.
- Create high-performance RESTful APIs to be consumed by external partners.
- Build highly scalable services that interact with the Qualys Enterprise TruRisk Platform.

Qualifications:
- Bachelors/Masters/Doctorate in Computer Science or equivalent
- 15+ years of Java development experience with micro-services architecture
- 5+ years of experience with Hadoop and Spark
- 5+ years of experience with Kafka
- 3+ years of experience with micro-services
- Prior experience creating scalable data ingestion pipelines
- 4+ years of experience in people management
- Experience with stream processing
- Experience getting data in parsed format mapped to a common data model
- Strong logical skills for code design and implementation
- Writing high-performance, reliable and maintainable code
- Experience in designing, developing and delivering scalable solutions
- Good knowledge of SQL, advanced data structures, design patterns, and object-oriented principles
- Experience with APIs, SDKs and third-party integrations
- Well versed with Java 7/8 and Scala
- Solid understanding of RDBMS, preferably Oracle

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site

It takes powerful technology to connect our brands and partners with an audience of hundreds of millions of people. Whether you're looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business and the world.

Job Location: Hyderabad (Hybrid Work Model)

The Data and Common Services (DCS) team within the Yahoo Advertising Engineering organization is responsible for the Advertising core data infrastructure and services that provide common, horizontal services for user and contextual targeting, privacy and analytics. We are looking for a talented junior or mid-level engineer who can design, implement, and support robust, scalable and high-quality solutions related to Advertising Targeting, Identity, Location and Trust & Verification. As a member of the team, you will be helping our Ad platforms deliver a highly accurate and relevant Advertising experience for our consumers and for the web at large.

Job Description
- Design and code backend Java applications and services. Emphasis is placed on implementing maintainable, scalable systems capable of handling billions of requests per day.
- Analyze business and technical requirements and design solutions that meet those needs.
- Collaborate with project managers to develop and clarify requirements.
- Work with Operations Engineers to ensure applications are operations-ready and able to be effectively monitored using automated methods.
- Troubleshoot production issues related to the team's applications.
- Effectively manage day-to-day tasks to meet scheduled commitments. Be able to work independently.
- Collaborate with programmers both on your team and on other teams.

Skills and Education
- B.Tech/BE in Computer Science or equivalent technical discipline
- 8+ years of experience designing and programming in a Unix/Linux environment
- Excellent written and verbal communication skills, e.g., the ability to explain the work in plain language
- Experience delivering innovative, customer-centric products at high scale
- A track record of successful delivery as an individual contributor
- Experience building robust, scalable, distributed services
- Execution experience in fast-paced environments and a performance-driven culture
- Experience with big data technologies, such as Spark, Hadoop, and Airflow
- Knowledge of CI/CD and DevOps tools and processes
- Strong programming skills in Java, Python, or Scala
- Solid understanding of RDBMS and general database concepts
- Extensive technical knowledge and experience with distributed systems
- Strong programming, testing, and troubleshooting skills
- Experience in a public cloud such as AWS

Important notes for your attention
Applications: All applicants must apply for Yahoo openings direct with Yahoo. We do not authorize any external agencies in India to handle candidates' applications. No agency nor individual may charge candidates for any efforts they make on an applicant's behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you. Selected candidates will go through formal interviews and assessments arranged by Yahoo direct.
Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails.

Yahoo is proud to be an equal opportunity workplace.
All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law. Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response. Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don't require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you'll be given notice to make arrangements. If you're curious about how this factors into this role, please discuss with the recruiter. Currently work for Yahoo? Please apply on our internal career site.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior Data Engineer/Developer
Number of Positions: 2

Job Description
The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

Responsibilities
- Design, construct, install, test and maintain highly scalable data management systems and data pipelines.
- Ensure systems meet business requirements and industry practices.
- Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
- Research opportunities for data acquisition and new uses for existing data.
- Develop data set processes for data modeling, mining and production.
- Integrate new data management technologies and software engineering tools into existing structures.
- Create custom software components and analytics applications.
- Install and update disaster recovery procedures.
- Collaborate with data architects, modelers, and IT team members on project goals.
- Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
- Proven 5-8 years of experience as a Senior Data Engineer or in a similar role.
- Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
- Expert-level SQL skills for data manipulation (DML) and validation (DB2).
- Experience with data pipeline and workflow management tools.
- Experience with object-oriented/functional scripting languages: Python, Java, Go, etc.
- Strong problem-solving and analytical skills.
- Excellent verbal communication skills.
- Good interpersonal skills.
- Ability to provide technical leadership for the team.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
Pfizer's Chief Digital Office (CDO) leads the transformation of Pfizer into a digital powerhouse that will generate superior patient experiences resulting in better health outcomes. The Analytics Experience team, which is part of the Artificial Intelligence, Data and Advanced Analytics (AIDA) organization, is responsible for creating a seamless experience for analytics experts to harness the potential of big data, machine learning, and interactive analytics through a unified platform across the enterprise - from scientific/clinical to commercial across all Pfizer geographies. Within the Analytics Experience team is a group of Platform Engineers who are responsible for managing the components that form the enterprise analytics platform. The enterprise analytics platform will be the digital engine that brings together investments we have made into a unified experience for colleagues that takes us to the next level of value creation. It powers next-generation insights by developing enterprise-grade data foundations, allowing data to flow horizontally, enabling a versatile analytics environment, and embedding insights into day-to-day work to create a digital, data-driven culture.

In this role, you will act as an experienced engineer for the Data Science platform that serves a diverse group of colleagues - ranging from novice business analysts to expert data scientists - across the enterprise. The role will require the ability to flex between technical engineering of the platform and hands-on working knowledge of creating data science applications as a user of the platform. You may have a software development background or an engineering background, or you may have been raised in a modern data analytics culture from the start of your career. You will work with technology vendors, support resources, delivery partners, and Pfizer stakeholders, and apply emerging and traditional technologies for analytics improvements in support of the enterprise analytics platform strategy.

Role Responsibilities
- Support the operations of multiple Dataiku DSS environments globally, including self-service tenants
- Support the automation of operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
- Resolve critical support issues with a high degree of technical complexity
- Document knowledge in the form of incident notes, technical articles, and contributions to knowledge bases or forums within specific areas of expertise
- Measure and monitor service and support performance and health (SLAs, KPIs, etc.)
- Perform maintenance activities to ensure high availability
- Partner with the User Success team to maintain key informational content, drive platform adoption, and champion a strong user community
- Collaborate with engineering and support teams on platform lifecycle management
- Support change governance and continuous improvement efforts
- Collaborate with vendors and internal engineering teams on evaluating new technologies and supportability
- Develop and maintain operational run-books and training curricula for support teams
- Directly engage and communicate with key business and technical stakeholders

Basic Qualifications
- Bachelor's degree in Computer Science, Statistics, or a related field
- 5+ years of relevant professional experience
- Experience in an engineering or technical role, ideally involving a complex and rapidly evolving software/product
- Comfort working with and reading code
- Experience working with at least one type of relational database and SQL
- Some experience with big data technologies, such as Hadoop, Spark, or Kubernetes
- Some experience with cloud platforms such as AWS, Azure, and GCP
- Some experience with Python
- Grit when faced with technical issues - you don't rest until you understand what is happening and why things are not working
- Excellent problem-solving and analytical skills with an aptitude for learning new technologies
- Strong communication skills and the ability to interface with both technical and non-technical individuals as needed
- Hands-on experience and proficiency with DevOps and information tools - e.g., JIRA, Confluence, SharePoint, Yammer, etc.
- Excellent verbal and written communication skills

Preferred Qualifications
- Familiarity with Ansible and other application deployment tools
- Some experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
- Some experience debugging networking issues such as DNS resolution, proxy settings, and security groups
- Some knowledge of data science / machine learning
- Some knowledge of Java

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Hyderabad Area

On-site

Overview
As a Senior Software Engineer on the AI Engineering Team at Cotiviti, you will be a leading force in developing robust, scalable machine learning solutions for healthcare applications. This senior-level position involves significant responsibility, including leading design and development efforts, mentoring junior engineers, and ensuring the delivery of high-quality solutions.

Basic Qualifications
- Bachelor's degree in Computer Science, Engineering, Math, or a related field, or equivalent experience
- 7+ years of experience with the Hadoop tech stack (Spark, Kafka)
- Experience with batch processing of large-scale data with Spark, and with real-time processing without Spark
- Proficiency in programming languages such as Scala or Python
- Extensive experience with Kafka and data streaming platforms
- Advanced knowledge of Databricks on AWS or similar cloud platforms
- Proven experience building and maintaining microservices
- Deep understanding of data architecture principles
- Experience leading design and development of large systems
- Proficiency with CI/CD tools like Jenkins
- Experience with Unix/Linux operating systems
- Familiarity with Agile processes and tools like Jira and Confluence
- Strong drive to learn and advocate for development best practices
- Strong knowledge of troubleshooting and optimizing Spark applications

Preferred Qualifications
- Experience with Databricks on Azure/AWS
- Experience with Kafka and DataStream/DataFrame/DataSet APIs
- Advanced proficiency with containerization tools like Docker and Kubernetes
- Knowledge of machine learning frameworks and tools such as DataRobot, H2O, and MLflow
- Experience with big data tools like Spark, Scala, Oozie, and Hive, or similar
- Streaming technologies: Kafka, Spark Streaming, RabbitMQ
- Experience with continuous integration and delivery, unit testing, and functional automation testing
- API development experience is a good addition
- Healthcare domain experience is a plus

Responsibilities
- Lead the development and implementation of machine learning solutions for healthcare applications
- Guide and mentor a team of developers and testers
- Collaborate with data scientists and other engineers to design and build scalable solutions
- Write, test, and maintain high-quality code with code coverage
- Lead design and code review sessions
- Troubleshoot and resolve complex technical issues
- Document your work and share knowledge with the team
- Advocate for and implement development best practices
- Train and mentor junior engineers and software engineers

Who You Are
Curious: You are always looking to deepen your understanding of complex problems.
Creative: You enjoy coming up with innovative solutions to difficult challenges.
Practical: You focus on delivering solutions that have real-world applications and value.
Focused: You maintain a clear vision of your goals and work diligently to achieve them.
Determined: You are committed to contributing to the development of advanced machine learning capabilities.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
- Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model
- Engage with project activities across the information lifecycle
- Understand client requirements, and develop data analytics strategies and solutions that meet them
- Apply knowledge and explain the benefits to organizations adopting strategies relating to next-generation/new-age data capabilities
- Be proficient in evaluating new technologies and identifying practical business cases to develop enhanced business value and increase operating efficiency
- Architect large-scale AI/ML products/systems impacting large-scale clients across industries
- Own end-to-end solutioning and delivery of data analytics/transformation programs
- Mentor and inspire a team of data scientists and engineers solving AI/ML problems through R&D while pushing the state of the art
- Liaise with colleagues and business leaders across domestic and global regions to deliver impactful analytics projects and drive innovation at scale
- Assist the sales team in reviewing RFPs, tender documents, and customer requirements
- Develop high-quality and impactful demonstrations, proof-of-concept pitches, solution documents, presentations, and other pre-sales assets
- Have in-depth business knowledge across a breadth of functional areas across sectors such as CPRD/FS/MALS/Utilities/TMT

Your Profile
- B.E. / B.Tech. + MBA (Systems / Data / Data Science / Analytics / Finance) with a good academic background
- Minimum 10+ years of on-the-job experience in data analytics, with at least 7 years of CPRD, FS, MALS, Utilities, TMT or other relevant domain experience required
- Specialization in data science, data engineering or the advanced analytics field is strongly recommended
- Excellent understanding and hands-on experience of data-science and machine-learning techniques and algorithms for supervised and unsupervised problems, NLP and computer vision
- Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding and hands-on experience in building deep-learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoders and decoders, etc.)
- Proficiency in coding in common data science languages and tools such as R, Python, Go, SAS, Matlab, etc.
- At least 7 years' experience deploying digital and data science solutions on large-scale projects
- At least 7 years' experience leading/managing a data science team
- Exposure or knowledge in cloud (AWS/GCP/Azure) and big data technologies such as Hadoop and Hive

What You Will Love About Working Here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
- Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model
- Contribute in the capacity of a Scrum Master/data business analyst to data projects
- Act as an agile coach, implementing and supporting agile principles, practices and rules of the scrum process and other rules that the team has agreed upon
- Apply knowledge of different frameworks like Scrum, Kanban, XP, etc.
- Drive tactical, consistent team-level improvement as part of the scrum
- Work closely with the Product Owner to prioritize by business value, keep work aligned with objectives and drive the health of the product backlog
- Facilitate the scrum ceremonies and analyze sprint reports and burn-down charts to identify areas of improvement
- Support and coordinate system implementations through the project lifecycle, working with other teams on a local and global basis
- Engage with project activities across the information lifecycle, often related to paradigms like building and managing business data lakes and ingesting data streams to prepare data, developing machine learning and predictive models to analyse data, visualizing data, and specializing in business models and architectures across various industry verticals

Your Profile
- Proven working experience as a Scrum Master/Data Business Analyst, with overall experience of 5 to 9+ years
- Preferably, domain knowledge in CPRD/FS/MALS/Utilities/TMT
- Independently able to work with the Product Management team and prepare functional analyses and user stories
- Experience in technical writing to create BRDs, FSDs, non-functional requirement documents, user manuals and use-case specifications
- Comprehensive and solid experience of Scrum as well as SDLC methodologies
- Experience in Azure DevOps/JIRA/Confluence or an equivalent system, and strong knowledge of the other agile frameworks
- Excellent meeting moderation and facilitation skills
- CSM/PSM 1 or 2 certification is mandatory; SAFe 5.0/6.0 is a plus
- Strong stakeholder management skills
- Good to have knowledge of the big data ecosystem, e.g. Hadoop
- Good to have working knowledge of R/Python

What You Will Love About Working Here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs.
It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description
We are seeking a Data Engineer with a strong background in data engineering. This role involves managing system requirements, design, development, integration, quality assurance, implementation, and maintenance of corporate applications.
- Work with product owners, business stakeholders and internal teams to understand business requirements and desired business outcomes.
- Assist in scoping and designing analytic data assets, implementing modelled attributes and contributing to brainstorming sessions.
- Build and maintain a robust data engineering process to develop and implement self-serve data and tools for Visa's product management teams and data scientists.
- Find opportunities to create, automate and scale repeatable analyses or build self-service tools for business users.
- Execute data engineering projects ranging from small to large, either individually or as part of a project team.
- Set the benchmark in the team for good data engineering practices and assist leads and architects in solution design.
- Exhibit a passion for optimizing existing solutions and making incremental improvements.

This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.

Qualifications
Basic Qualifications
- Bachelor's degree, OR 3+ years of relevant work experience

Preferred Qualifications
- Minimum of 1 year's experience in building data engineering pipelines.
- Design and coding skills with big data technologies like Hadoop, Spark, Hive and MapReduce.
- Mastery of PySpark or Scala.
- Expertise in a programming language like Java or Python. Knowledge of OOP concepts like inheritance and polymorphism, and of implementing design patterns in programming, is needed.
- Experience with cloud platforms like AWS, GCP, or Azure is good to have.
- Excellent problem-solving skills and ability to think critically.
- Experience with any one ETL tool like Informatica, SSIS, Pentaho or Azure Data Factory.
- Knowledge of successful design and development of data-driven real-time and batch systems.
- Experience in data warehousing and expertise in any one of the RDBMSs like SQL Server, Oracle, etc.
- Nice to have reporting skills in Power BI/Tableau/QlikView.
- Strong understanding of cloud architecture and service offerings including compute, storage, databases, networking, AI, and ML.
- Passionate about delivering zero-defect code that meets or exceeds the proposed defect SLA, with a high sense of accountability for quality and timeliness of deliverables.
- Experience developing as part of an Agile/Scrum team is preferred, along with hands-on Jira experience.
- Understanding of basic CI/CD functionality and Git concepts is a must.

Additional Information
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.
Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
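
Interviewers often expect working code for these tools, not just definitions. Below is a minimal, illustrative PySpark sketch of a typical warm-up exercise: reading a Hive-managed table with Spark and aggregating it. The `sales.orders` table and its `city` and `amount` columns are invented for illustration, and the snippet assumes a local PySpark installation with Hive support enabled.

```python
# Minimal PySpark sketch: aggregate a (hypothetical) Hive table of orders.
# Assumes pyspark is installed and a Hive metastore is reachable.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hadoop-interview-prep")
    .enableHiveSupport()  # lets Spark read tables from the Hive metastore
    .getOrCreate()
)

# "sales.orders" is a placeholder table name for illustration only.
orders = spark.table("sales.orders")

# Total revenue per city, highest first - a typical Hive/Spark warm-up task.
revenue_by_city = (
    orders
    .groupBy("city")
    .agg(F.sum("amount").alias("total_revenue"))
    .orderBy(F.desc("total_revenue"))
)

revenue_by_city.show(10)
spark.stop()
```

The same aggregation could be written directly in HiveQL; being able to move comfortably between SQL-on-Hadoop and Spark code is exactly the kind of fluency these related skills describe.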

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium; see the word-count sketch after this list)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
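
Several of the questions above, notably those on the shuffle phase, data locality, and MapReduce optimization, are easier to answer with a concrete example in hand. The classic word count shows the map, shuffle, and reduce stages end to end. The sketch below uses PySpark's RDD API as a stand-in for hand-written Hadoop MapReduce (the stages map one-to-one), and the HDFS input path is a placeholder.

```python
# Word count with the RDD API: map -> shuffle -> reduce, the same shape as
# a classic Hadoop MapReduce job. The input path below is hypothetical.
from operator import add

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///tmp/input.txt")  # placeholder HDFS path

counts = (
    lines.flatMap(lambda line: line.split())  # "map": emit one token per word
         .map(lambda word: (word, 1))         # key-value pairs, as a mapper would
         .reduceByKey(add)                    # triggers the shuffle: pairs with the
                                              # same key meet on one partition, then
                                              # their counts are summed ("reduce")
)

for word, count in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(word, count)

spark.stop()
```

In an interview, pointing at `reduceByKey` and explaining that it combines values locally on each partition before shuffling, which is why it is preferred over `groupByKey` for aggregations, addresses the shuffle and optimization questions in one stroke.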

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies