4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As an Assistant Manager - Analytics, you will play a crucial role in driving data-driven projects, designing complex data solutions, and providing valuable insights to stakeholders to contribute to the growth of our Ads product and business metrics. Your responsibilities will involve gaining deep insights into the Ads core product, leading large-scale experimentation on Adtech innovation, and forecasting demand and supply to drive growth in our Ads product and the complex Ads Entertainment business.

You will be part of the Central Analytics team, which is embedded within various business and product teams in a matrix structure to provide comprehensive data insights that drive strategic decisions. The team acts as a strategic enabler for JioHotstar's Ads business and product functions, analysing consumer experience, consumer supply, advertiser demand, and ad-serving capabilities to achieve goals and KPIs across the Ads product, advertisers' objectives, and Entertainment business planning. The team focuses on leveraging experiments, applying GenAI for innovative problem-solving, and building analytical frameworks that guide key decisions and keep teams informed and focused.

Reporting to the Manager - Product Analytics, your key responsibilities will include applying analytics knowledge and skills to problem-solving; generating quality data insights through reports, dashboards, and structured documentation; developing a deep understanding of the data platform and technology stack; using statistical techniques to validate findings; communicating complex data concepts effectively; partnering with stakeholders to identify opportunities; managing projects end-to-end; and contributing data-driven insights to experiments to foster a culture of innovation and collaboration.

To excel in this role, you should demonstrate expertise in predictive analysis with proficiency in R, SQL, Python, and PySpark; familiarity with big data platforms and tools such as Hadoop, Spark, and Hive; experience in dashboard building and data visualization using tools like Tableau and Power BI; advanced technical skills in collecting and disseminating information accurately; knowledge of digital analytics and clickstream data; strong communication skills for presenting insights clearly; a passion for the entertainment industry; and experience in Adtech and OTT platforms.

The ideal candidate will have a Bachelor's or Master's degree in Engineering, Mathematics, Operational Research, Statistics, Physics, or a related technical discipline, along with 4-6 years of experience in Business/Product Analytics, preferably at consumer technology companies.

Join us at JioStar, a global media & entertainment company that is revolutionizing the entertainment consumption experience for millions of viewers worldwide. We are committed to diversity and to creating an inclusive workplace where everyone can thrive and contribute their unique perspectives.
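The experimentation and statistical-validation duties above lend themselves to a small worked example: a two-proportion z-test for a hypothetical two-variant ad experiment, sketched with scipy. The conversion counts are invented, and this is an illustrative sketch, not the team's actual methodology.

```python
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))              # two-sided p-value
    return z, p_value

# Hypothetical experiment: control ad format vs. new ad format
z, p = two_proportion_ztest(conv_a=480, n_a=50_000, conv_b=550, n_b=50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject H0 at alpha = 0.05 if p < 0.05
```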
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
This Microsoft Cloud Data Engineer role is a great opportunity for a talented and motivated individual to design, build, and manage cloud-based data solutions using Microsoft Azure technologies. Your primary responsibility will be to create robust, scalable, and secure data pipelines and to support the analytics workloads that drive business insights and data-driven decision-making.

You will design and deploy ETL/ELT pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage. Additionally, you will develop and oversee data integration workflows that bring in data from various sources such as APIs, on-prem systems, and cloud services. It will also be important to optimize and maintain SQL-based data models, views, and stored procedures in Azure SQL, SQL Managed Instance (SQL MI), or Synapse SQL Pools.

Collaboration with analysts, data scientists, and business teams will be crucial to gather data requirements and provide reliable, high-quality datasets. You will ensure data quality, governance, and security by implementing robust validation, monitoring, and encryption mechanisms. Supporting infrastructure automation using Azure DevOps, ARM templates, or Terraform for resource provisioning and deployment will also be part of your responsibilities, as will troubleshooting, performance tuning, and the continuous improvement of the data platform.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field. A minimum of 3 years of experience in data engineering with a focus on Microsoft Azure data services is required. Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake is a must. Strong proficiency in SQL and data modeling is essential, along with experience in Python, PySpark, or .NET for data processing. An understanding of data warehousing, data lakes, and ETL/ELT best practices is important, as is familiarity with DevOps tools and practices in an Azure environment. Knowledge of Power BI or similar visualization tools is also beneficial. Additionally, holding the Microsoft Certified: Azure Data Engineer Associate certification or an equivalent is preferred.
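As an illustration of the kind of Azure pipeline work this posting describes, here is a minimal PySpark sketch of an ELT step as it might run on Azure Databricks. The storage account, container, and column names are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt").getOrCreate()

# Hypothetical ADLS Gen2 paths; substitute your own storage account/containers
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"

# Extract and clean: parse timestamps, cast amounts, drop incomplete rows
orders = (spark.read.option("header", True).csv(raw_path)
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["order_id", "order_ts"]))

# Transform: daily revenue rollup
daily = (orders.groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue")))

# Load: partitioned parquet in the curated zone
daily.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)
```

In a production pipeline, Azure Data Factory would typically orchestrate a notebook or job containing a step like this rather than running it ad hoc.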
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
You will work as a Technical Lead Data Engineer for a leading data and AI/ML solutions provider based in Gurgaon. In this role, you will be responsible for designing, developing, and leading complex data projects, primarily on Google Cloud Platform and other modern data stacks.

Your key responsibilities will include leading the design and implementation of robust data pipelines, collaborating with cross-functional teams to deliver end-to-end data solutions, owning project modules, developing technical roadmaps, and implementing data governance frameworks on GCP. You will integrate GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and GenAI with platforms such as Snowflake. Additionally, you will write efficient code in Python, SQL, and ETL/orchestration tools, utilize containerized solutions for scalable deployments, and apply expertise in PySpark, Kafka, and advanced data querying in high-volume data environments. Monitoring, optimizing, and troubleshooting system performance, reducing job run-times through architecture optimization, developing data warehouses, and mentoring team members will also be part of your role.

To be successful in this position, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Extensive hands-on experience with Google Cloud Platform data services and Snowflake integration, strong programming skills in Python and SQL, proficiency in PySpark, Kafka, and data querying tools, and experience with containerized solutions using Google Kubernetes Engine are essential. Strong communication and documentation skills, experience with large distributed datasets, and the ability to balance short-term deliverables with long-term technical sustainability are also required. Prior leadership experience in data engineering teams and exposure to cloud data platforms are desirable.

This role offers you the opportunity to lead high-impact data projects for reputed clients in a fast-growing data consulting environment, work with cutting-edge technologies, and collaborate in an innovative, growth-oriented culture.
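To make the BigQuery side of the GCP work above concrete, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, and table names are placeholders, and credentials are assumed to come from the environment.

```python
from google.cloud import bigquery

# Credentials are resolved from the environment
# (e.g. the GOOGLE_APPLICATION_CREDENTIALS variable or workload identity)
client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

# result() blocks until the query job finishes, then yields rows
for row in client.query(query).result():
    print(row.event_date, row.events)
```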
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
We are seeking an experienced Databricks on AWS and PySpark engineer to join our team. Your role will involve designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS, and optimizing data processing workflows with PySpark. Collaborating with data scientists and analysts to develop data models, and ensuring data quality, security, and compliance with industry standards, will also be key responsibilities.

Your main tasks will include troubleshooting data pipeline issues, optimizing performance, and staying current with industry trends and emerging data engineering technologies. You should have at least 3 years of experience in data engineering with a focus on Databricks on AWS and PySpark, strong expertise in PySpark and Databricks for data processing, modeling, and warehousing, and hands-on experience with AWS services such as S3, Glue, and IAM.

Proficiency in data engineering principles, data governance, and data security is essential, along with experience managing data processing workflows and data pipelines. Strong problem-solving skills, attention to detail, and effective communication and collaboration abilities are the key soft skills required for this role, as is the ability to work in a fast-paced, dynamic environment while adapting to changing requirements and priorities.
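A minimal sketch of the sort of pipeline step this role describes, assuming a Databricks runtime on AWS where Delta Lake is available and S3 access is granted via an instance profile; the bucket names and event_id column are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Hypothetical S3 locations; an IAM instance profile is assumed for access
source = "s3://example-raw-bucket/events/2025/"
target = "s3://example-curated-bucket/delta/events/"

events = (spark.read.json(source)
          .withColumn("ingest_date", F.current_date())
          .dropDuplicates(["event_id"]))          # keeps re-runs idempotent

(events.write.format("delta")                     # Delta Lake on Databricks
       .mode("append")
       .partitionBy("ingest_date")
       .save(target))
```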
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Karnataka
On-site
As the Senior Director of Marketing and Customer Analytics at Labcorp, you will play a pivotal role in shaping the company's marketing strategies and customer engagement initiatives using data-driven insights. Your responsibilities will include defining and executing Labcorp's long-term marketing analytics strategy, delivering actionable insights to enhance campaign planning and audience segmentation, and championing the use of AI tools for personalized customer engagement. Additionally, you will lead a high-performing team, collaborate with internal stakeholders, and support the development of audience segmentation and marketing measurement initiatives.

You will be responsible for building and leading a team of marketing analysts and data scientists to drive global programs, while also creating a supportive and inspiring team culture. Your role will involve developing marketing analytics strategies, establishing key metrics for measuring marketing programs, and collaborating with various stakeholders to leverage data for informed decision-making. You will also lead efforts in campaign analytics, marketing performance, and team development, ensuring that Labcorp maximizes data utilization and drives business growth effectively.

To succeed in this role, you should have at least 15 years of experience in analytics, with a proven track record of shaping and executing marketing data strategies in Fortune 500 settings. Strong leadership skills, expertise in advanced analytics and predictive modeling, and the ability to communicate complex analytical findings in a compelling manner are essential for this position. Additionally, experience with data architecture principles and data visualization tools, and proficiency in tools such as Python, R, SQL, Tableau, and Power BI, will be beneficial.

The ideal candidate for this role is a self-starter with a proactive mindset, capable of working independently as well as collaboratively in a team environment. You should possess strong project management skills, excellent interpersonal communication abilities, and a results-oriented approach to driving data-driven decisions. Prior experience in medical diagnostics, pharmaceutical R&D, or healthcare sectors, as well as a strong understanding of Martech data sources, will be advantageous.

Labcorp is an Equal Opportunity Employer and encourages individuals from all backgrounds to apply for this position. If you require assistance due to a disability or need accommodation during the application process, please visit Labcorp Accessibility for support. Your privacy is important to us, and we adhere to strict guidelines for collecting and storing personal data.
Posted 1 week ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Technical Architect (AWS Glue)
Location: Gurugram, Haryana, India | Employment Type: Full-time | Work Arrangement: Partially remote | Apply by: No close date

Position: Technical Architect (AWS Glue)
Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python
Implement and manage ETL/ELT processes to ensure seamless data integration and transformation
Ensure information security and compliance with data governance standards
Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems
Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team

Primary Skills (a minimal Glue job sketch illustrating several of these appears at the end of this posting):
Enhancements, new development, defect resolution, and production support of ETL development using AWS native services
Integration of data sets using AWS services such as Glue and Lambda functions
Utilization of AWS SNS to send emails and alerts
Authoring ETL processes using Python and PySpark
ETL process monitoring using CloudWatch events
Connecting with different data sources like S3 and validating data using Athena
Experience in CI/CD using GitHub Actions
Proficiency in Agile methodology
Extensive working experience with advanced SQL and a deep understanding of complex SQL

Secondary Skills:
Experience working with Snowflake and an understanding of Snowflake architecture, including concepts like internal and external tables, stages, and masking policies

Competencies / Experience:
Deep technical skills in AWS Glue (Crawler, Data Catalog): 10 years
Hands-on experience with Python and PySpark: 5 years
PL/SQL experience: 5 years
CloudFormation and Terraform: 5 years
CI/CD with GitHub Actions: 5 years
Experience with BI systems (Power BI, Tableau): 5 years
Good understanding of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda: 5 years
Additionally, familiarity with any of the following is highly desirable: Jira, Git

About Us
We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners.

Why Work for Material
In addition to fulfilling, high-impact work, company culture and benefits are integral to determining if a job is the right fit for you. Here's a bit about who we are and highlights of what we offer.

Who We Are & What We Care About
Material is a global company and we work with best-of-class brands worldwide. We also create and launch new brands and products, putting innovation and value creation at the center of our practice. Our clients are at the top of their class, across industry sectors from technology to retail, transportation, finance, and healthcare. Material employees join a peer group of exceptionally talented colleagues across the company, the country, and even the world.
We develop capabilities and craft leading-edge market offerings across seven global practices, including strategy and insights, design, data & analytics, technology, and tracking. Our engagement management team makes it all hum for clients. We prize inclusion and interconnectedness, and we amplify our impact through the people, perspectives, and expertise we engage in our work. Our commitment to deep human understanding, combined with a science and systems approach, uniquely equips us to bring a rich frame of reference to our work. We are a community focused on learning and making an impact: Material is an outcomes-focused company, and we create experiences that matter, create new value, and make a difference in people's lives.

What We Offer
Professional development and mentorship
Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified six times in a row)
Health and family insurance
40 leave days per year, along with maternity and paternity leave
Wellness, meditation, and counseling sessions
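As referenced in the Primary Skills list above, here is a minimal sketch of an AWS Glue job that reads a cataloged table, writes curated output to S3, and publishes an SNS alert on failure. The database, table, bucket, and topic ARN are hypothetical placeholders, not values from this posting.

```python
import sys

import boto3
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
sns = boto3.client("sns")

# Hypothetical resources
TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:etl-alerts"
TARGET = "s3://example-curated-bucket/orders/"

try:
    # Source table registered in the Glue Data Catalog (e.g. by a crawler)
    orders = glue_context.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_orders"
    )
    df = orders.toDF().dropna(subset=["order_id"])
    df.write.mode("overwrite").parquet(TARGET)
except Exception as exc:
    # Alert operators via SNS, then re-raise so the job is marked failed
    sns.publish(TopicArn=TOPIC_ARN,
                Subject=f"Glue job {args['JOB_NAME']} failed",
                Message=str(exc))
    raise
```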
Posted 1 week ago
12.0 - 20.0 years
35 - 40 Lacs
Mumbai
Work from Office
Job Title: Big Data Developer - Project Support & Mentorship
Location: Mumbai

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions.
Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka (a brief Spark-with-Hive sketch follows this posting).
Lead by example in object-oriented development, particularly using Scala and Java.
Translate complex requirements into clear, actionable technical tasks for the team.
Contribute to the development of ETL processes for integrating data from various sources.
Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
8+ years of professional experience in Big Data development and engineering.
Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
Solid object-oriented development experience with Scala and Java.
Strong SQL skills with experience working with large data sets.
Practical experience designing, installing, configuring, and supporting Big Data clusters.
Deep understanding of ETL processes and data integration strategies.
Proven experience mentoring or supporting junior engineers in a team setting.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and interpersonal skills.

Preferred Qualifications:
Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
A leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.
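A minimal sketch of the Spark-with-Hive batch work this role describes, assuming a cluster where Spark is configured with access to a Hive metastore; the database, table, and column names are illustrative only.

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily-clickstream-rollup")
         .enableHiveSupport()          # use the cluster's Hive metastore
         .getOrCreate())

# Hypothetical Hive tables and partition column
clicks = spark.table("raw.clickstream").filter(F.col("dt") == "2025-01-01")

rollup = (clicks.groupBy("user_id")
          .agg(F.count("*").alias("events"),
               F.countDistinct("session_id").alias("sessions")))

# Persist the rollup as a managed Hive table for downstream consumers
rollup.write.mode("overwrite").saveAsTable("analytics.user_daily_activity")
```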
Posted 1 week ago
5.0 - 10.0 years
27 - 40 Lacs
Noida, Pune, Bengaluru
Work from Office
Description: We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams.

Requirements: 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on analytics engines like Big Data and PySpark. Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments.

Job Responsibilities: Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies. Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis (a brief MySQL-plus-Redis caching sketch follows this posting). Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance. Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services. Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure.

What We Offer: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties.
Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can enjoy coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
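For the MySQL and Redis responsibilities listed above, here is one plausible caching pattern sketched with the pymysql and redis client libraries. The host names, credentials, table, and TTL are placeholders, not values from this posting.

```python
import pymysql
import redis

cache = redis.Redis(host="localhost", port=6379, db=0)

def daily_order_count(day: str) -> int:
    """Return the order count for a day, caching the result for one hour."""
    key = f"orders:count:{day}"
    cached = cache.get(key)
    if cached is not None:
        return int(cached)          # cache hit: skip the database entirely

    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="shop")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM orders WHERE order_date = %s",
                        (day,))
            (count,) = cur.fetchone()
    finally:
        conn.close()

    cache.setex(key, 3600, count)   # expire the entry after one hour
    return count

print(daily_order_count("2025-01-01"))
```

The cache-aside pattern shown here trades a small staleness window (the TTL) for a large reduction in repeated query load on MySQL.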
Posted 1 week ago
3.0 years
0 Lacs
India
On-site
In this job, you will: As a Gen AI Engineer at Telerapp, you will play a crucial role in designing, developing, and implementing cutting-edge generative artificial intelligence models and systems. Your expertise will contribute to AI-powered solutions that generate high-quality, clinically relevant text while pushing the boundaries of creativity and innovation. You will collaborate closely with cross-functional teams to deliver AI-driven products that reshape the healthcare industry and user experience. This role requires a deep understanding of machine learning, neural networks, and generative modeling techniques.

To join, you should:
Have 3+ years of full-time industry experience.
Have hands-on experience in contemporary AI, such as training generative AI models (LLMs and image-to-text models), improving upon pre-trained models, evaluating these models, and building feedback loops.
Have specialized expertise in model fine-tuning, RLHF, RAG, LLM tool use, etc.
Have experience with LLM prompt engineering and familiarity with LLM-based workflows and architectures.
Have proficiency in Python, PySpark, TensorFlow, PyTorch, Keras, Transformers, and cloud platforms such as Google Cloud Platform (GCP), Vertex AI, or similar.
Be able to collaborate with software engineers to integrate generative models into production systems, ensuring the scalability, reliability, and efficiency of the deployed models.
Have experience with effective data visualization approaches and a keen eye for detail in the visual communication of findings.

Your Responsibilities:
Develop new LLMs for medical imaging.
Develop and implement methods that improve training efficiency and extend or improve LLM capabilities, reliability, and safety in the realm of image-to-text generation using medical data.
Perform data preprocessing, indexing, and feature engineering specific to healthcare image and text data.
Keep up to date with the research literature and think beyond the state of the art to address the needs of our users.

Preferred Qualifications: Master's degree in a quantitative field such as Computer Science or Data Science.
Salary: Market competitive.
For immediate consideration, send your CV to hr@telerapps.com.
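Since the role centers on RAG and LLM-based workflows, here is a minimal retrieval sketch using numpy, with random toy vectors standing in for a real embedding model. In practice the embeddings would come from a service such as Vertex AI, and the assembled prompt would be sent to an LLM rather than printed.

```python
import numpy as np

# Toy document store; real embeddings would come from an embedding model
docs = ["Chest X-ray shows no acute findings.",
        "MRI indicates a small lesion in the left lobe.",
        "CT scan is consistent with prior imaging."]
doc_vecs = np.random.default_rng(0).normal(size=(len(docs), 8))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ query_vec          # cosine similarity on unit vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# Stand-in for an embedded user query
query_vec = np.random.default_rng(1).normal(size=8)
context = "\n".join(retrieve(query_vec))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)   # would be sent to an LLM in a real pipeline
```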
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams. Requirements 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on analytics engines like Big Data and PySpark. Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments. Job responsibilities Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies. Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis. Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance (a brief Prometheus instrumentation sketch follows this posting). Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services. Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
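For the monitoring responsibilities noted in this posting, here is a minimal sketch of instrumenting a Python data job with the prometheus_client library, exposing metrics that Prometheus can scrape and Grafana can chart. The metric names, port, and simulated workload are arbitrary choices for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter("pipeline_rows_processed_total",
                         "Rows processed by the pipeline")
BATCH_SECONDS = Histogram("pipeline_batch_duration_seconds",
                          "Time spent processing one batch")

def process_batch() -> None:
    with BATCH_SECONDS.time():            # records the batch duration
        rows = random.randint(100, 1000)  # stand-in for real work
        time.sleep(0.1)
        ROWS_PROCESSED.inc(rows)

if __name__ == "__main__":
    start_http_server(8000)               # metrics served at :8000/metrics
    while True:
        process_batch()
```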
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams. Requirements 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on analytics engines like Big Data and PySpark. Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments. Job responsibilities Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies. Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis. Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance. Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services. Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Machine Learning Engineer II (Big-Data Heavy)
Are you fascinated by data and by building robust data and machine learning pipelines that process massive amounts of data at scale and speed to provide crucial insights to the end customer? This is exactly what we, the Data & AI ML organization in Expedia, do. Our team is looking for a Machine Learning Engineer to join our Machine Learning Engineering team in Gurgaon. Our team works very closely with Machine Learning Scientists in a fast-paced Agile environment to create and productionize algorithms that directly impact the traveler journey and experience. We believe in being Different. We seek new ideas, different ways of thinking, diverse backgrounds, and varied approaches, because averages can lie and sameness is dangerous. Expedia is committed to crafting an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability, or age.

Experience: 4+ years with a Bachelor's degree, or 2+ years with a Master's degree.

What You'll Do
Work in a cross-functional team of Machine Learning Engineers and ML Scientists to design and code large-scale batch and real-time data pipelines on AWS.
Prototype creative solutions quickly by developing minimum viable products, and work with seniors and peers in crafting and implementing the technical vision of the team.
Communicate and work effectively with geographically distributed cross-functional teams.
Participate in code reviews to assess overall code quality and flexibility.
Resolve problems and roadblocks as they occur with peers and help unblock junior members of the team; follow through on details and drive issues to closure.
Define, develop, and maintain artifacts like technical design and partner documentation.
Drive continuous improvement in software and the development process within an agile development team.
Participate in user story creation in collaboration with the team.
Support and troubleshoot data and/or system issues as needed.

Who You Are
Degree in software engineering, computer science, informatics, or a similar field.
Have developed software in a team environment of at least 5 engineers (agile, version control, etc.).
Have built and maintained a software project/product in production environments on public/hybrid cloud infrastructure.
Comfortable programming in Python and Scala.
Hands-on experience with OOAD, design patterns, SQL, and NoSQL.
Knowledgeable in big data technologies, in particular Spark, Hive, Hue, Qubole, and Databricks.
Experience building real-time streaming applications, preferably with Spark Streaming and Kafka/KStreams.
Must-have experience: solid experience working on Big Data; a good understanding of ML pipelines and the ML lifecycle; batch processing and inferencing applications; PySpark; SQL. Experience using cloud services (e.g. AWS). Experience with workflow orchestration tools (e.g. Airflow; a minimal DAG sketch follows this posting). Passionate about learning, especially in the areas of micro-services, system architecture, Data Science, and Machine Learning. Experience working with Agile/Scrum methodologies. Good-to-have experience: real-time / live processing and inferencing applications.

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request.

We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
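As referenced above, here is a minimal Airflow DAG sketch showing a daily batch-scoring flow of the kind this role involves. The DAG id and task callables are illustrative stand-ins, not Expedia's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features(**context):
    print("pulling features for", context["ds"])   # ds = logical run date

def score_model(**context):
    print("scoring model for", context["ds"])

with DAG(
    dag_id="daily_batch_scoring",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    features = PythonOperator(task_id="extract_features",
                              python_callable=extract_features)
    score = PythonOperator(task_id="score_model",
                           python_callable=score_model)

    features >> score   # scoring runs only after feature extraction succeeds
```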
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. The focus is on building scalable and efficient data pipelines for handling large datasets and enabling both batch and real-time data streaming and processing.

Responsibilities:
> Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.
> Develop and maintain Kafka-based data pipelines: design Kafka Streams applications, set up Kafka clusters, and ensure efficient data flow.
> Create and optimize Spark applications using Scala and PySpark: leverage these languages to process large datasets and implement data transformations and aggregations.
> Integrate Kafka with Spark for real-time processing: build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (see the sketch after this posting).
> Collaborate with data teams, including data engineers, data scientists, and DevOps, to design and implement data solutions.
> Tune and optimize Spark and Kafka clusters: ensure high performance, scalability, and efficiency of data processing workflows.
> Write clean, functional, and optimized code, adhering to coding standards and best practices.
> Troubleshoot and resolve issues: identify and address any problems related to Kafka and Spark applications.
> Maintain documentation: create and maintain documentation for Kafka configurations, Spark jobs, and other processes.
> Stay updated on technology trends: continuously learn and apply new advancements in functional programming, big data, and related technologies.

Proficiency in:
Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).
Spark (Scala, Python) for data processing and analysis.
Kafka for real-time data ingestion and processing.
ETL processes and data ingestion tools.
Deep hands-on expertise in PySpark, Scala, and Kafka.
Programming Languages: Scala, Python, or Java for developing Spark applications; SQL for data querying and analysis.
Other Skills: data warehousing concepts; Linux/Unix operating systems; problem-solving and analytical skills; version control systems.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
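As referenced in the responsibilities above, a minimal PySpark Structured Streaming sketch that ingests JSON events from Kafka and maintains a running aggregate. The broker address, topic, schema, and checkpoint path are placeholders, and the spark-sql-kafka connector package must be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Requires the spark-sql-kafka connector package on the Spark classpath
spark = SparkSession.builder.appName("kafka-orders-stream").getOrCreate()

schema = StructType([StructField("order_id", StringType()),
                     StructField("amount", DoubleType())])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "orders")                      # placeholder topic
       .load())

# Kafka delivers bytes; decode the value and parse the JSON payload
orders = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("o"))
          .select("o.*"))

query = (orders.groupBy().agg(F.sum("amount").alias("running_revenue"))
         .writeStream.outputMode("complete")
         .format("console")                                # swap for a real sink
         .option("checkpointLocation", "/tmp/chk/orders")  # placeholder path
         .start())

query.awaitTermination()
```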
Posted 1 week ago
9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: PLSQL Developer + Python

Candidate Specifications
Candidates should have 9+ years of experience.

Job Description
Candidates should have 9+ years of experience in Python and PySpark.
Candidates should have strong experience in AWS and PL/SQL.
Candidates should be strong in data management, including data governance and data streaming, along with data lakes and data warehouses.
Candidates should also have exposure to team handling and stakeholder management.
Candidates should have excellent written and verbal communication skills.

Skills Required
Role: PLSQL Developer + Python
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Bachelor's degree
Employment Type: Full Time, Permanent
Key Skills: Python, PL/SQL, Data Management, AWS

Other Information
Job Code: GO/JC/701/2025
Recruiter Name: Sheena Rakesh
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Below are examples of role/skills profiles used by the UK firm when hiring Data Analytics based roles indicated above. Job Description & Summary Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges. The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered. 
Role Description As a pivotal member of our data team, Senior Associates are key in shaping and refining data management and analytics functions, including our expanding Data Services. You will be instrumental in helping us deliver value-driven insights by designing, integrating, and analysing cutting-edge data systems. The role emphasises leveraging the latest technologies, particularly within the Microsoft ecosystem, to enhance operational capabilities and drive innovation. You'll work on diverse and challenging projects, allowing you to actively influence strategic decisions and develop innovative solutions. This, in turn, paves the way for unparalleled professional growth and the development of a forward-thinking mindset. As you contribute to our Data Services, you'll have a front-row seat to the future of data analytics, providing an enriching environment to build expertise and expand your career horizons. Key Activities Include, But Are Not Limited To Design and implement data integration processes. Manage data projects with multiple stakeholders and tight timelines. Developing data models and frameworks that enhance data governance and efficiency. Addressing challenges related to data integration, quality, and management processes. Implementing best practices in automation to streamline data workflows. Engaging with key stakeholders to extract, interpret, and translate data requirements into meaningful insights and solutions. Engage with clients to understand and deliver data solutions. Work collaboratively to meet project goals. Lead and mentor junior team members. Essential Requirements More than 5 years of experience in data analytics, with proficiency in managing large datasets and crafting detailed reports. Proficient in Python. Experience working within a Microsoft Azure environment. Experience with data warehousing and data modelling (e.g., dimensional modelling, data mesh, data fabric). Proficiency in PySpark/Databricks/Snowflake/MS Fabric, and intermediate SQL skills (a minimal Snowflake connector sketch follows this posting). Experience with orchestration tools such as Azure Data Factory (ADF), Airflow, or DBT. Familiarity with DevOps practices, specifically creating CI/CD and release pipelines. Knowledge of Azure DevOps tools and GitHub. Knowledge of Azure SQL DB or any other RDBMS system. Basic knowledge of GenAI. Additional Skills / Experiences That Will Be Beneficial Understanding of data governance frameworks. Awareness of Power Automate functionalities. Why Join Us? This role isn't just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
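As referenced in the Essential Requirements above, here is a minimal query sketch using the snowflake-connector-python library. The account, credentials, warehouse, and table are placeholders; a real deployment would pull credentials from a secrets manager or use key-pair authentication.

```python
import snowflake.connector

# Placeholder credentials; in practice use a secrets manager or key-pair auth
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ANALYTICS_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("""
        SELECT region, SUM(amount) AS revenue
        FROM sales
        GROUP BY region
        ORDER BY revenue DESC
    """)
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()   # always release the session and its warehouse credits
```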
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Below are examples of role/skills profiles used by the UK firm when hiring Data Analytics based roles indicated above. Job Description & Summary Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges. The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered. 
Role Description As an integral part of our data team, Associate 2 professionals contribute significantly to the development of data management and analytics functions, including our growing Data Services. In this role, you'll assist engagement teams in delivering meaningful insights by helping design, integrate, and analyse data systems. You will work with the latest technologies, especially within the Microsoft ecosystem, to enhance our operational capabilities. Working on a variety of projects, you'll have the chance to contribute your ideas and support innovative solutions. This experience offers opportunities for professional growth and helps cultivate a forward-thinking mindset. As you support our Data Services, you'll gain exposure to the evolving field of data analytics, providing an excellent foundation for building expertise and expanding your career journey. Key Activities Include, But Are Not Limited To Assisting in the development of data models and frameworks to enhance data governance and efficiency. Supporting efforts to address data integration, quality, and management process challenges. Participating in the implementation of best practices in automation to streamline data workflows. Collaborating with stakeholders to gather, interpret, and translate data requirements into practical insights and solutions. Support management of data projects alongside senior team members. Assist in engaging with clients to understand their data needs. Work effectively as part of a team to achieve project goals. Essential Requirements At least two years of experience in data analytics, with a focus on handling large datasets and supporting the creation of detailed reports. Familiarity with Python and experience in working within a Microsoft Azure environment. Exposure to data warehousing and data modelling techniques (e.g., dimensional modelling). Basic proficiency in PySpark and Databricks/Snowflake/MS Fabric, with foundational SQL skills. Experience with orchestration tools like Azure Data Factory (ADF), Airflow, or DBT. Awareness of DevOps practices, including introducing CI/CD and release pipelines. Familiarity with Azure DevOps tools and GitHub. Basic understanding of Azure SQL DB or other RDBMS systems. Introductory knowledge of GenAI concepts. Additional Skills / Experiences That Will Be Beneficial Understanding of data governance frameworks. Awareness of Power Automate functionalities. WHY JOIN US? This role is not just about the technical expertise—it’s about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Below are examples of role/skills profiles used by the UK firm when hiring Data Analytics based roles indicated above. Job Description & Summary Operate is the firm's delivery engine, serving as the orchestrator of services across the organisation. It is a global team of delivery professionals united by a commitment to excellence and impact. Operate has built a strong reputation for collaboration, mobilising quickly, and effectively getting tasks done. It aims to build a world-class delivery capability, focusing on evolving operational delivery, embedding automation and AI, and raising the bar for quality and consistency. The goal is to add strategic value for clients and contribute to the firm’s ambition of pre-eminence in the market. Team members in Operate are provided with meaningful opportunities to lead, learn, and grow, embracing a future-ready workforce trained in cutting-edge technology. Operate ensures clients can access a single front door to global delivery chains, providing tailored, high-quality solutions to meet evolving challenges. The role will be based in Kolkata. However, with a diverse range of clients and projects, you'll occasionally have the exciting opportunity to work in various locations, offering exposure to different industries and cultures. This flexibility opens doors to unique networking experiences and accelerated career growth, enriching your professional journey. Your willingness and ability to do this will be discussed as part of the recruitment process. Candidates who prefer not to travel will still be considered. 
Role Description As a pivotal member of our data team, Senior Associates are key in shaping and refining data management and analytics functions, including our expanding Data Services. You will be instrumental in helping us deliver value-driven insights by designing, integrating, and analysing cutting-edge data systems. The role emphasises leveraging the latest technologies, particularly within the Microsoft ecosystem, to enhance operational capabilities and drive innovation. You'll work on diverse and challenging projects, allowing you to actively influence strategic decisions and develop innovative solutions. This, in turn, paves the way for unparalleled professional growth and the development of a forward-thinking mindset. As you contribute to our Data Services, you'll have a front-row seat to the future of data analytics, providing an enriching environment to build expertise and expand your career horizons. Key Activities Include, But Are Not Limited To Design and implement data integration processes. Manage data projects with multiple stakeholders and tight timelines. Develop data models and frameworks that enhance data governance and efficiency. Address challenges related to data integration, quality, and management processes. Implement best practices in automation to streamline data workflows. Engage with key stakeholders to extract, interpret, and translate data requirements into meaningful insights and solutions. Engage with clients to understand and deliver data solutions. Work collaboratively to meet project goals. Lead and mentor junior team members. Essential Requirements More than 5 years of experience in data analytics, with proficiency in managing large datasets and crafting detailed reports. Proficient in Python and experienced in working within a Microsoft Azure environment. Experience with data warehousing and data modelling (e.g., dimensional modelling, data mesh, data fabric). Proficiency in PySpark/Databricks/Snowflake/MS Fabric, and intermediate SQL skills. Experience with orchestration tools such as Azure Data Factory (ADF), Airflow, or DBT. Familiarity with DevOps practices, specifically creating CI/CD and release pipelines. Knowledge of Azure DevOps tools and GitHub. Knowledge of Azure SQL DB or any other RDBMS system. Basic knowledge of GenAI. Additional Skills / Experiences That Will Be Beneficial Understanding of data governance frameworks. Awareness of Power Automate functionalities. Why Join Us? This role isn't just about the technical expertise—it's about being part of something transformational. You'll be part of a vibrant team where growth opportunities are vast and where your contributions directly impact our mission to break new ground in data services. With a work culture that values innovation, collaboration, and personal growth, joining PwC's Operate Data Analytics team offers you the chance to shape the future of operational and data service solutions with creativity and foresight. Dive into exciting projects, challenge the status quo, and drive the narrative forward!
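To make the dimensional-modelling requirement above concrete, here is a hedged PySpark sketch of deriving a star-schema fact table from a staged feed. All table and column names are invented for illustration, not taken from any engagement.

```python
# Illustrative star-schema load over hypothetical staging and dimension tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-example").getOrCreate()

stg_sales = spark.read.table("staging.sales")       # hypothetical staged feed
dim_customer = spark.read.table("dw.dim_customer")  # conformed dimension

# Resolve the business key to the dimension's surrogate key; map unmatched
# rows to a -1 "unknown member" instead of silently dropping them
fact_sales = (
    stg_sales.join(dim_customer, on="customer_code", how="left")
    .withColumn("customer_sk", F.coalesce(F.col("customer_sk"), F.lit(-1)))
    .select("customer_sk", "sale_date", "quantity", "net_amount")
)

fact_sales.write.mode("overwrite").partitionBy("sale_date").saveAsTable("dw.fact_sales")
```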
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Partner with business, product, and engineering teams to define problem statements, evaluate feasibility, and design AI/ML-driven solutions that deliver measurable business value. Lead and execute end-to-end AI/ML projects — from data exploration and model development to validation, deployment, and monitoring in production. Drive solution architecture using techniques in data engineering, programming, machine learning, NLP, and Generative AI. Champion the scalability, reproducibility, and sustainability of AI solutions by establishing best practices in model development, CI/CD, and performance tracking. Guide junior and associate AI/ML engineers through technical mentoring, code reviews, and solution reviews. Identify and evangelize the adoption of emerging tools, technologies, and methodologies across teams. Translate technical outputs into actionable insights for business stakeholders through storytelling, data visualizations, and stakeholder engagement. We are looking for: A seasoned AI/ML engineer with 7+ years of hands-on experience delivering enterprise-grade AI/ML solutions. Advanced proficiency in Python, SQL, and PySpark, and experience working with cloud platforms (Azure preferred) and tools such as Databricks, Synapse, ADF, and Web Apps. Strong expertise in applied text analytics, NLP, and Generative AI, with real-world deployment exposure. Solid understanding of model evaluation, optimization, bias mitigation, and monitoring in production. A problem solver with scientific rigor, strong business acumen, and the ability to bridge the gap between data and decisions. Prior experience in leading cross-functional AI initiatives or collaborating with engineering teams to deploy ML pipelines. Bachelor's or master's degree in Computer Science, Engineering, Statistics, or a related quantitative field. A PhD is a plus. A prior understanding of the shipping and logistics business domain is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
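As an illustration of the model evaluation and monitoring duties described above, here is a small, hedged champion/challenger comparison in Python; the models, data, and metric choice are invented for the example and are not Maersk's.

```python
# Hypothetical champion/challenger evaluation using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real labelled dataset
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

champion = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
challenger = GradientBoostingClassifier().fit(X_tr, y_tr)

# Compare holdout discrimination before any production swap
for name, model in [("champion", champion), ("challenger", challenger)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
```

In production, the same comparison would extend to drift, bias, and per-segment metrics, which the posting calls out under bias mitigation and monitoring.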
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key responsibilities: Partner with business, product, and engineering teams to define problem statements, evaluate feasibility, and design AI/ML-driven solutions that deliver measurable business value. Lead and execute end-to-end AI/ML projects — from data exploration and model development to validation, deployment, and monitoring in production. Drive solution architecture using advanced techniques in machine learning, NLP, Generative AI, and statistical modeling. Champion the scalability, reproducibility, and sustainability of AI solutions by establishing best practices in model development, CI/CD, and performance tracking. Guide junior and associate AI/ML scientists through technical mentoring, code reviews, and solution reviews. Identify and evangelize the adoption of emerging tools, technologies, and methodologies across teams. Translate technical outputs into actionable insights for business stakeholders through storytelling, data visualizations, and stakeholder engagement. We are looking for: A seasoned AI/ML scientist with 7+ years of hands-on experience delivering enterprise-grade AI/ML solutions. Advanced proficiency in Python, SQL, and PySpark, and experience working with cloud platforms (Azure preferred) and tools such as Databricks, Synapse, ADF, and Web Apps. Strong expertise in text analytics, NLP, and Generative AI, with real-world deployment exposure. Solid understanding of model evaluation, optimization, bias mitigation, and monitoring in production. A problem solver with scientific rigor, strong business acumen, and the ability to bridge the gap between data and decisions. Prior experience in leading cross-functional AI initiatives or collaborating with engineering teams to deploy ML pipelines. Bachelor's or master's degree in Computer Science, Engineering, Statistics, or a related quantitative field. A PhD is a plus. A prior understanding of the shipping and logistics business domain is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key responsibilities: Collaborate with business, platform, and technology stakeholders to understand the scope of projects. Perform comprehensive exploratory data analysis at various levels of data granularity to derive inferences for further solutioning, experimentation, and evaluation. Design, develop, and deploy robust enterprise AI solutions using Generative AI, NLP, machine learning, etc. Continuously focus on providing business value while ensuring technical sustainability. Promote and drive adoption of cutting-edge data science and AI practices within the team. Continuously stay up to date on relevant technologies and use this knowledge to push the team forward. We are looking for: A team player with 4-7 years of experience in the field of data science and AI. Proficiency with programming/querying languages like Python, SQL, and PySpark, along with Azure cloud platform tools like Databricks, ADF, Synapse, Web Apps, etc. An individual with strong work experience in the areas of text analytics, NLP, and Generative AI. A person with a scientific and analytical thinking mindset, comfortable with brainstorming and ideation. A doer with a deep interest in driving business outcomes through AI/ML. A candidate with a bachelor's or master's degree in engineering or computer science, with or without a specialization in AI/ML. A candidate with strong business acumen and a desire to collaborate with business teams and help them by solving business problems. A prior understanding of the shipping and logistics business domain is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
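To ground the exploratory-data-analysis responsibility above, here is a hedged PySpark profiling pass; the shipments table and its columns are hypothetical, chosen only to echo the logistics domain the posting mentions.

```python
# Quick data-health and granularity profile over a hypothetical table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("eda-example").getOrCreate()
shipments = spark.read.table("analytics.shipments")  # hypothetical table

# Null share and cardinality per column: a first pass on data health
total = shipments.count()
for c in shipments.columns:
    nulls = shipments.filter(F.col(c).isNull()).count()
    distinct = shipments.select(c).distinct().count()
    print(f"{c}: {nulls / total:.1%} null, {distinct} distinct values")

# A coarser granularity: weekly volumes and transit times per trade lane
(shipments
 .groupBy("trade_lane", F.weekofyear("shipped_at").alias("week"))
 .agg(F.count("*").alias("shipments"), F.avg("transit_days").alias("avg_transit"))
 .orderBy("trade_lane", "week")
 .show(10))
```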
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: · 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services like AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale. · Team management: must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs, and in hiring and nurturing talent in Palantir Foundry. · Training: the candidate should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually. · At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop.
· Exposure to Map and Vertex is a plus. · Palantir AIP experience is a plus. · Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry. · Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary. · Hands-on experience working and building on Ontology (especially demonstrable experience in building semantic relationships). · Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs; some experience with Apache Kafka and Airflow is a prerequisite as well. · Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary. · Experience in MLOps is a plus. · Experience in developing and managing scalable architecture, and working experience in managing large data sets. · Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus. · Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX, etc.) is a plus. · A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview. · Experience in developing GenAI applications is a plus. Mandatory skill sets: · At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry. · At least 3 years of experience with Foundry services. Preferred skill sets: Palantir Foundry. Years of experience required: 4 to 7 years (3+ years relevant). Education qualification: Bachelor's degree in computer science, data science, or any other engineering discipline; a master's degree is a plus. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
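For readers unfamiliar with Foundry's code-based pipelines, below is a minimal sketch using the transforms.api decorator pattern that roles like this one build with; the dataset paths are placeholders, not real resources, and the cleaning logic is invented for illustration.

```python
# Hedged Foundry transform sketch; input/output paths are placeholders.
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Company/pipelines/clean_events"),  # placeholder output dataset
    raw=Input("/Company/raw/events"),           # placeholder input dataset
)
def clean_events(raw):
    # Deduplicate and standardise timestamps before the dataset
    # backs an Ontology object type
    return (
        raw.dropDuplicates(["event_id"])
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .filter(F.col("event_ts").isNotNull())
    )
```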
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Summary Position Summary Technical Lead – Big Data & Python skillset As a Technical Lead, you will be a strong full-stack developer and individual contributor, responsible for designing application modules and delivering them from the technical standpoint. The role calls for a high level of skill in producing high-level designs, working with the architect, and leading module implementations technically. You must be a strong developer with the ability to innovate, and should be the go-to person for the assigned modules, applications/projects, and initiatives. You will maintain appropriate certifications and apply the respective skills on project engagements. Work you'll do A unique opportunity to be a part of the growing Delivery, Methods & Tools team that drives consistency, quality, and efficiency of the services delivered to stakeholders. Responsibilities: Full-stack, hands-on developer and strong individual contributor. Go-to person on the assigned projects. Able to understand and implement the project as per the proposed architecture. Implements best design principles and patterns. Understands and implements the security aspects of the application. Knows ADO and is familiar with using it. Obtains/maintains appropriate certifications and applies the respective skills on project engagements. Leads or contributes significantly to the Practice. Estimates and prioritizes product backlogs. Defines work items. Works on unit test automation. Recommends improvements to existing software programs as deemed necessary. Go-to person in the team for any technical issues. Conducts peer reviews. Conducts tech sessions within the team. Provides input to standards and guidelines. Implements best practices to enable consistency across all projects. Participates in continuous improvement processes, as assigned. Mentors and coaches juniors in the team. Contributes to POCs. Supports the QA team with clarifications. Takes ownership of deployment and tollgate activities. Oversees the development of documentation. Participates in regular work and status communications and stakeholder updates. Supports the development of intellectual capital. Contributes to the knowledge network. Acts as a technical escalation point. Conducts sprint reviews. Optimizes code and advises the team on best practices. Skills: Education qualification: BE/B.Tech (IT/CS/Electronics), MCA, or MSc Computer Science. 6-9 years of IT experience in application development, support, or maintenance activities. 2+ years of experience in team management. Must have in-depth knowledge of software development lifecycles, including agile development and testing. Enterprise data management frameworks, data security and compliance (optional). Data ingestion, storage, and transformation. Data auditing and validation (optional). Data visualization with Power BI (optional). Data analytics systems (optional). Scaling and handling large data sets. Designing and building data services, with at least 2+ years' experience in: Azure SQL DB, SQL Warehouse, ADF, Azure Storage, ADO CI/CD, and Azure Synapse. Data model design. Data entities: modeling and depiction. Metadata management (optional). Database development patterns and practices: SQL/NoSQL (relational/non-relational – native JSON), flexible schema, indexing practices, master/child model data management, columnar and row stores, and API/SDK operations and management for NoSQL DBs.
Design and implementation of data warehouses: Azure Synapse, Data Lake, Delta Lake, and Apache Spark management. Programming languages: PySpark/Python, C# (optional). APIs: invoke, request, and response. PowerShell with Azure CLI (optional). Git with ADO repo management, branching strategies, and version control management (rebasing, filtering, cloning, merging). Debugging, performance tuning, and optimization skills: the ability to analyze PySpark and PL/SQL code, enhance response times, manage garbage collection, and apply debugging, logging, and alerting techniques. Prior experience that demonstrates good business understanding is needed (experience in a professional services organization is a plus). Excellent written and verbal communication, organization, analytical, planning, and leadership skills. Strong management, communication, technical, and remote collaboration skills are a must. Experience in dealing with multiple projects and cross-functional teams, and the ability to coordinate across teams in a large matrix organization environment. Ability to effectively conduct technical discussions directly with project/product management and clients. Excellent team collaboration skills. Education & Experience: Education qualification: BE/B.Tech (IT/CS/Electronics), MCA, or MSc Computer Science. 6-9 years of domain experience or other relevant industry experience. 2+ years of product owner, business analyst, or system analysis experience. Minimum 3+ years of software development experience in .NET projects. 3+ years of experience in Agile/Scrum methodology. Work timings: 9am-4pm, 7pm-9pm. Location: Hyderabad. Experience: 6-9 years. The team At Deloitte, the Shared Services center improves overall efficiency and control while giving every business unit access to the company's best and brightest resources. It also lets business units focus on what really matters – satisfying customers and developing new products and services to sustain competitive advantage. A shared services center is a simple concept, but making it work is anything but easy. It involves consolidating and standardizing a wildly diverse collection of systems, processes, and functions. And it requires a high degree of cooperation among business units that generally are not accustomed to working together – with people who do not necessarily want to change. The USI shared services team provides a wide array of services to the U.S. and is constantly evaluating and expanding its portfolio. The shared services team provides call center support, document services support, financial processing and analysis support, record management support, ethics and compliance support, and admin assistant support. How You'll Grow At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development.
Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte's culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world. #CAP-PD Our purpose Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300914
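As a concrete example of the PySpark debugging and performance-tuning skills this profile lists, here is a short, hedged sketch of a common tuning pass; the tables and sizes are hypothetical.

```python
# Broadcast-join and repartition tuning over hypothetical tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-example").getOrCreate()

large = spark.read.table("dw.fact_transactions")  # large fact table
small = spark.read.table("dw.dim_branch")         # small dimension

# Broadcast the small side to avoid a shuffle-heavy sort-merge join
joined = large.join(F.broadcast(small), "branch_id")
joined.explain()  # inspect the physical plan to confirm the broadcast

# Repartition on the aggregation key before a wide groupBy to balance tasks
result = (
    joined.repartition("branch_id")
    .groupBy("branch_id")
    .agg(F.sum("amount").alias("total_amount"))
)
result.write.mode("overwrite").saveAsTable("dw.branch_totals")
```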
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking to hire AWS Professionals in the following area: AWS Data Engineer. Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, and Python. Secondary skillsets: any ETL tool, GitHub, DevOps (CI/CD). Experience: 3-4 years. Degree in computer science, engineering, or similar fields. Mandatory skill set: Python, PySpark, SQL, and AWS, with experience designing, developing, testing, and supporting data pipelines and applications. 3+ years of working experience in data integration and pipeline development. 3+ years of experience with AWS Cloud data integration using a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus. 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server). Strong real-life experience in Python development, especially in PySpark in an AWS Cloud environment. Strong skills with SQL and NoSQL databases like MySQL, Postgres, DynamoDB, and Elasticsearch. Workflow management tools like Airflow. AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice). Good to have: Snowflake, Palantir Foundry. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and an ethical corporate culture.
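To illustrate the Glue-plus-PySpark stack this posting centres on, here is a minimal hedged sketch of a Glue job; the catalog database, table, and S3 bucket names are placeholders.

```python
# Hedged AWS Glue job sketch; all names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (placeholder database/table names)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop obviously bad rows, then write partitioned Parquet to S3
df = dyf.toDF().filter("order_id IS NOT NULL")
(
    df.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/clean/orders/")  # placeholder bucket
)

job.commit()
```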
Posted 1 week ago