
3719 Scala Jobs - Page 50

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

Remote


About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing our customers' ability to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

LLM Platform at Agoda
Agoda's LLMOps Platform is at the forefront of enabling scalable, secure, and efficient deployment of Large Language Models (LLMs) and generative AI solutions. Built on robust cloud and on-premises infrastructure, our platform empowers teams to experiment, deploy, and monitor LLM-powered applications with confidence and agility. Our LLMOps team bridges the gap between advanced AI research and real-world production systems, ensuring that LLMs are delivered reliably, responsibly, and at scale. We focus on best practices in model lifecycle management, data privacy, prompt engineering, and continuous improvement to help our users unlock the full potential of generative AI.

In This Role, You'll Get to:
- Lead the design, development, and implementation of LLMOps solutions for deploying, monitoring, and managing large language models and generative AI systems
- Collaborate with data scientists, ML engineers, and product teams to build scalable pipelines for LLM fine-tuning, evaluation, and inference
- Develop and maintain tools for prompt management, versioning, and automated evaluation of LLM outputs (a sketch of this idea follows the posting)
- Ensure responsible AI practices by integrating data anonymization, bias detection, and compliance monitoring into LLM workflows
- Develop and maintain monitoring and management tools to ensure the reliability and performance of our on-premises and cloud machine learning platform
- Work with stakeholders across the organization to understand their generative AI needs and deliver LLMOps solutions that drive business impact
- Stay up to date with the latest trends and technologies in LLMOps, generative AI, and responsible AI, and share your knowledge to keep the team at the cutting edge
- Mentor junior team members and help them grow their skills and expertise in LLMOps
- Troubleshoot and resolve issues related to LLM deployment, scaling, and performance

What You'll Need to Succeed:
- 5+ years of experience in LLMOps, MLOps, software engineering, or a related field
- Strong programming skills in a modern language (Python, Kotlin, Scala, Java, etc.)
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams
- Commitment to code quality, simplicity, and performance

It's Great if You Have:
- Experience with LLMOps platforms and tools (e.g., Hugging Face, LangChain, Ray, MLflow, Kubeflow)
- Familiarity with prompt engineering, retrieval-augmented generation (RAG), and LLM evaluation frameworks
- Experience with data privacy, PII anonymization, and responsible AI practices
- Strong knowledge of containerization and orchestration (Docker, Kubernetes)
- Experience with DevOps, CI/CD, and scalable API development for LLM serving
- Passion for the engineering challenges of generative AI and scaling LLM solutions
- Experience designing and building LLMOps infrastructure, including data pipelines, model management, and monitoring tools
- Deep understanding of LLM architectures (e.g., GPT, Llama) and experience with model fine-tuning, evaluation, and deployment

Benefits
- Flexible hybrid work arrangement with the option to work remotely for part of the year
- Generous annual leave, sick leave, and public holidays
- Exclusive accommodation discounts for personal travel
- Annual allowance for wellness, learning, fitness, and travel experiences
- Opportunities for career growth through training, certifications, and internal promotions
- Competitive compensation and comprehensive health benefits

Relocation Package (for employee and family):
- Full visa sponsorship for employees, spouse, and children
- Support for airfare, travel insurance, and temporary accommodation upon arrival
- Assistance with moving household goods and pet relocation

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
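To make the prompt-management responsibility above concrete, here is a minimal Scala sketch of a versioned prompt registry with a pluggable evaluation hook. All of the names (PromptVersion, PromptRegistry, evaluate) are invented for illustration and do not reflect Agoda's actual platform.

```scala
import java.time.Instant

// Hypothetical sketch: a versioned prompt registry with an evaluation hook.
final case class PromptVersion(
  name: String,     // logical prompt name, e.g. "hotel-summary"
  version: Int,     // monotonically increasing version number
  template: String, // prompt text with {placeholders}
  createdAt: Instant
)

final class PromptRegistry {
  private var store = Map.empty[String, List[PromptVersion]]

  /** Register a new version of a prompt and return it. */
  def publish(name: String, template: String): PromptVersion = {
    val history = store.getOrElse(name, Nil)
    val next = PromptVersion(name, history.size + 1, template, Instant.now())
    store = store.updated(name, next :: history)
    next
  }

  /** Latest version wins; older versions stay queryable for rollback. */
  def latest(name: String): Option[PromptVersion] =
    store.get(name).flatMap(_.headOption)
}

object PromptDemo extends App {
  val registry = new PromptRegistry
  registry.publish("hotel-summary", "Summarize this hotel review: {review}")
  val v2 = registry.publish("hotel-summary", "In two sentences, summarize: {review}")

  // Stand-in for automated evaluation: score an LLM output against a rubric.
  def evaluate(output: String): Double = if (output.nonEmpty) 1.0 else 0.0

  println(s"serving ${v2.name} v${v2.version}, score=${evaluate("ok")}")
}
```

Keeping old versions queryable is what makes rollback and A/B comparison of prompts cheap, which is the core of the "versioning and automated evaluation" bullet.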

Posted 1 week ago

Apply

0.0 - 2.0 years

8 - 9 Lacs

Hyderabad

Work from Office


Do you have the technical prowess to build data solutions that process billions of rows a day using AWS technologies? Are you excited about creating real-time and batch analytics platforms that drive business decisions? Do you thrive on solving complex data challenges and implementing big data architectures? We're seeking a talented Data Engineer who is passionate about working with large-scale data analytics solutions and cloud technologies.

First things first, you must know SQL and data modelling like the back of your hand. You need to know Big Data and MPP systems. You have a history of coming up with innovative solutions to complex technical problems. You are a quick and willing learner of new technologies. You are not tool-centric; you determine what technology works best for the problem at hand and apply it accordingly. You will work with top-notch technical professionals developing complex systems at scale and with a focus on sustained operational excellence. We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you.

- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
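For a concrete flavor of the batch-analytics work this posting describes, here is a minimal Spark batch job in Scala that aggregates a large event table by day. The bucket, table and column names (events, event_date, revenue) are invented for illustration and are not part of the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a daily batch aggregation over a large event table.
object DailyRevenue extends App {
  val spark = SparkSession.builder()
    .appName("daily-revenue")
    .getOrCreate()

  val events = spark.read.parquet("s3://example-bucket/events/") // assumed layout

  val daily = events
    .groupBy(col("event_date"))
    .agg(sum("revenue").as("total_revenue"), count("*").as("event_count"))

  // Partitioning the output by date keeps downstream scans cheap,
  // which matters at billions of rows per day.
  daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/daily_revenue/")

  spark.stop()
}
```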

Posted 1 week ago

Apply

8.0 - 13.0 years

50 - 55 Lacs

Hyderabad

Work from Office


Do you pioneer? Do you enjoy solving complex problems in building and analyzing large datasets? Do you enjoy focusing first on your customer and working backwards? The Amazon transportation controllership team is looking for an experienced Data Engineering Manager with experience in architecting large/complex data systems and a strong record of achieving results, scoping and delivering large projects end-to-end. You will be the key driver in building out our vision for scalable data systems to support the ever-growing Amazon global transportation network businesses.

As a Data Engineering Manager in Transportation Controllership, you will be at the forefront of managing large projects, providing vision to the team, and designing and planning large financial data systems that will allow our businesses to scale world-wide. You should have deep expertise in the database design, management, and business use of extremely large datasets, including using AWS technologies such as Redshift, S3, EC2, Data Pipeline and other big data technologies. Above all, you should be passionate about warehousing large datasets together to answer business questions and drive change. You should have excellent business acumen and communication skills to be able to work with multiple business teams, and be comfortable communicating with senior leadership. Due to the breadth of the areas of business, you will coordinate across many internal and external teams, and provide visibility to the senior leaders of the company with your strong written and oral communication skills. We need individuals with a demonstrated ability to learn quickly, think big, execute both strategically and tactically, and motivate and mentor their team to deliver business value to our customers on time.

A day in the life
On a daily basis you will:
- manage and help grow a team of high-performing engineers
- understand new business requirements and architect data engineering solutions for them
- plan your team's priorities, working with relevant internal/external stakeholders, including sprint planning
- resolve impediments faced by the team
- update leadership as needed
- use judgement in making the right tactical and strategic decisions for the team and organization
- monitor the health of the databases and ingestion pipelines

- 2+ years of processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark or a Hadoop-based big data solution) experience
- 2+ years of relational database technology (such as Redshift, Oracle, MySQL or MS SQL) experience
- 2+ years of developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes) experience
- 5+ years of data engineering experience
- Experience managing a data or BI team
- Experience communicating to senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with AWS tools and technologies (Redshift, S3, EC2)

Posted 1 week ago

Apply

1.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office


The Data Engineer will own the data infrastructure for the Reverse Logistics team, which includes collaborating with software development teams to build the data infrastructure and maintaining a highly scalable, reliable and efficient data system to support the fast-growing business. You will work with analytic tools, write excellent SQL scripts, optimize the performance of SQL queries, and partner with internal customers to answer key business questions. We look for candidates who are self-motivated, flexible, hardworking and who like to have fun.

About the team
The Reverse Logistics team at Amazon Hyderabad Development Center is an agile team whose charter is to deliver the next generation of the Reverse Logistics platform. As a member of this team, your mission will be to design, develop, document and support a massively scalable, distributed data warehousing, querying and reporting system.

- 2+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Knowledge of AWS infrastructure
- Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets
- Strong analytical and problem-solving skills; curious, self-motivated, a self-starter with a can-do attitude; comfortable working in a fast-paced, dynamic environment
- Bachelor's degree in a quantitative/technical field such as computer science, engineering, or statistics
- Proven track record of strong interpersonal and communication (verbal and written) skills
- Experience developing insights across various areas of customer-related data: financial, product, and marketing
- Proven problem-solving skills, attention to detail, and exceptional organizational skills
- Ability to deal with ambiguity and competing objectives in a fast-paced environment
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing and operations
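Since this posting emphasizes optimizing SQL queries over large datasets, here is a hedged Scala/Spark sketch of one common optimization: filtering each side before a join so Catalyst can push the predicates into the file scans. The table names (returns, orders) and columns are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: the same join written so Spark can push the date filter down to
// the parquet scan instead of filtering after the join.
object ReturnsReport extends App {
  val spark = SparkSession.builder().appName("returns-report").getOrCreate()
  import spark.implicits._

  val returns = spark.read.parquet("s3://example/returns/")
  val orders  = spark.read.parquet("s3://example/orders/")

  // Filter *before* the join; Catalyst pushes this predicate into the
  // file scan, so far fewer rows reach the shuffle.
  val recentReturns = returns.filter($"return_date" >= "2025-01-01")

  val report = recentReturns
    .join(orders, Seq("order_id"))
    .groupBy($"return_reason")
    .count()

  report.explain() // inspect the physical plan to confirm the pushdown
  report.show()
}
```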

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Bengaluru

Work from Office


We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location and transactional signals to power people-based marketing
- Ingesting vast amounts of identity and event data from our customers and partners
- Facilitating data transfers across systems
- Ensuring the integrity and health of our datasets
- And much more

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure (see the streaming sketch after this posting)
- Daily use of technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5-10 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert usage of services like Spark and Hive
- Experience with web frameworks such as Flask or Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions
- Experience using cloud services (AWS) at scale
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi or Delta Lake
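Given the team's real-time plus batch charter, here is a minimal Scala sketch of a Spark Structured Streaming job that reads identity events from Kafka and lands them on a data lake. The broker address, topic name and schema are invented for illustration and are not Zeta's actual pipeline.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Minimal sketch of real-time ingestion: read identity events from Kafka,
// parse JSON, and append to a data-lake path.
object IdentityEventStream extends App {
  val spark = SparkSession.builder().appName("identity-events").getOrCreate()

  val schema = new StructType()
    .add("user_id", StringType)
    .add("signal_type", StringType)
    .add("ts", TimestampType)

  val raw = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
    .option("subscribe", "identity-events")           // assumed topic
    .load()

  val events = raw
    .selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), schema).as("e"))
    .select("e.*")

  // The checkpoint location is what makes the stream restartable exactly-once.
  events.writeStream
    .format("parquet")
    .option("path", "s3://example/identity-events/")
    .option("checkpointLocation", "s3://example/checkpoints/identity-events/")
    .start()
    .awaitTermination()
}
```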

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology/MCA
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts, and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type:
------------------------------------------------------
Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.
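To make the "design and develop analytical data models" responsibility concrete, here is a hedged Scala/Spark sketch that derives a small daily fact table from raw transactions. The table and column names are invented for illustration and do not reflect any bank's actual data model.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: expressing an analytical model in SQL on top of Spark.
object DailyAccountFacts extends App {
  val spark = SparkSession.builder().appName("daily-account-facts").getOrCreate()

  spark.read.parquet("/data/raw/transactions").createOrReplaceTempView("transactions")

  // A conformed daily fact: one row per account per day.
  val facts = spark.sql(
    """
      |SELECT account_id,
      |       CAST(txn_ts AS DATE) AS txn_date,
      |       SUM(amount)          AS net_amount,
      |       COUNT(*)             AS txn_count
      |FROM transactions
      |GROUP BY account_id, CAST(txn_ts AS DATE)
      |""".stripMargin)

  // Partitioned output that BI tools can query directly.
  facts.write.mode("overwrite").partitionBy("txn_date")
    .parquet("/data/marts/daily_account_facts")
}
```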

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology/MCA
- 3 to 4 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in at least one data integration platform such as Ab Initio, Apache Spark, Talend or Informatica
- Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of data warehousing concepts, and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description

Who We Are
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Goldman Sachs Electronic Trading (GSET) continues to innovate to remain the top provider in electronic trading by building superior technology and delivering high-quality products. This is a multi-year investment in people, platforms and products. Join the team and participate in the development and launch of best-in-class products for top clients across the industry. We are looking for eager, nimble and ambitious engineers to join our growing team of visionaries and drive Goldman Sachs Electronic Trading to achieve and exceed our goals.

Your Impact
This team maintains the Client Data Platform for different Global Markets business lines. As stewards of critical components in order execution and post-trade, the team is accountable for a high degree of software quality. The team consists of self-guided, pragmatic individuals who are motivated to change the status quo in calculated ways. As a member of the team, you will play an integral role on the trading floor. This is a dynamic, entrepreneurial team with a passion for technology and the markets, and it suits individuals who thrive in a fast-paced environment. The team takes a data-driven approach to decision making, and you should be willing to participate in the full product lifecycle: requirements gathering, design, implementation, testing, support, and monitoring trading performance for systems and strategies used by our clients.

Responsibilities
- Design, build and maintain a high-performance, high-availability and adaptive platform for handling client data
- Develop a highly reliable user interface for the data management platform, with complex visualization, latency and security requirements
- Collaborate across multiple regions and businesses for requirements gathering, solution design and execution
- Engage with trading desk users, other application owners and compliance officers on new feature requests and product evolution

Basic Qualifications
- Bachelor's or Master's degree in computer science or engineering, or equivalent experience
- 3+ years of professional experience in front-end design/development
- Proficient in developing user interfaces and implementing them with React.js workflows (such as Flux or Redux)
- Proficient in JavaScript and able to write well-documented, clean code
- Ability to drive and take UI design decisions
- Strong communication skills and the ability to work in a team

Preferred Qualifications
- Thorough knowledge of Java programming concepts
- Strong knowledge of object-oriented programming, data structures, algorithms and design patterns
- Good understanding of the Python/Scala programming language
- Experience in the Financial Services industry

Goldman Sachs Engineering Culture
At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.

Posted 1 week ago

Apply

16.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Are you ready for the best destination of your career?
Spotnana is transforming the $11 trillion travel industry by building modern infrastructure that brings freedom, simplicity, and trust to travelers worldwide. Backed by over $115M in funding from top-tier investors, including ICONIQ, Durable, Mubadala, Madrona, and Sandberg Bernthal Ventures, we are addressing some of the travel industry's most complex challenges—and we need your expertise to help us succeed.

How you'll make an impact
We are looking for a hands-on leader to head a team of top-talent engineers in the design, development, testing, and deployment of Spotnana. You are a leader with progressive technical experience, a demonstrated progression of management scope, and a passion for managing engineering talent in a fast-paced environment. You stay current with your technical skills, and possess exceptional project management and communication skills.

What you'll own
- Own and drive the engineering/technical strategy for the team, deliver business-critical initiatives in the specified timeline, and actively contribute to the product roadmap to drive the maximum impact for Spotnana by defining the team's vision, mission, and strategy
- Identify broader problems (product features, end-user experience, technology and business), provide multi-quarter roadmaps to the team, embrace constraints, and prioritize effectively by working with stakeholders and teams
- Collaborate and partner with peers in Product, Design and Engineering to craft the best building blocks, guidelines and standards, and evolve the team's vision, mission and strategy
- Engage in architectural and coding discussions across multiple teams, influence the roadmap, and take ownership of key projects and initiatives
- Be well versed in software architecture and design; take part in design reviews (and code reviews) and ensure the right decisions are made across the development lifecycle (both front end and back end)
- Raise the bar for engineering standards, tooling and processes
- Work closely with different product and business stakeholders at various locations in the US and India to drive the execution of multiple business plans and technologies
- Improve, optimize and identify opportunities for efficient software development processes
- Hire, develop and retain a strong team of software engineers
- Exhibit strong leadership and communication skills to collaborate with product, engineering and management teams across different geographic locations
- Promote and support company policies, procedures, mission, values, and standards of ethics and integrity
- Lead and direct large-scale, complex, cross-functional projects by reviewing project requirements; translating requirements into technical solutions; directing and reviewing design artifacts (for example, proofs of concept, prototypes); writing and developing code; overseeing software design; reviewing unit test cases; communicating status and issues to team members and stakeholders; directing the project team and cross-functional teams; enhancing designs to prevent recurrences of defects; ensuring on-time delivery and hand-offs; interacting with the project manager to provide input on the project plan; and providing leadership to the project team
- Partner with cross-functional teams to define problem statements and goals, and build analytical models and measurement frameworks to assess progress and make recommendations
- Maximize efficiency in a constantly evolving environment where the process is fluid and creative solutions are the norm
- Serve as a technical thought leader and champion

Experience you'll bring with you
- 16+ years of overall experience, with 12+ years of hands-on software engineering experience in planning, architecting, and delivering high-quality products
- Proficiency in system design; good understanding of scalability and how various distributed systems work
- Deep expertise in architecture and design patterns, cloud-native microservices development, and developing resilient, reliable, quality software
- Very strong understanding of and experience in the software development lifecycle
- Technical mindset and product-development-driven experience, with the ability to deep dive on technical issues and provide guidance to the team
- Deep knowledge of designing and implementing real-time pipelines
- Experience in large implementations of cloud platforms - Hadoop, BigQuery, AWS, Snowflake
- Advanced SQL and Python, and knowledge of Java, Scala, C/C++ or a comparable programming language
- A thoughtful and proven approach to building and growing highly efficient engineering teams
- Strong ability to work with a variety of cloud data tools and technologies and recommend the best tools for various needs
- Demonstrated ability to develop sizing and capacity planning for large-scale cloud big data platforms and to optimize performance and platform throughput
- Advanced analytical and problem-solving skills, with familiarity and comfort using mixed methods and/or types of data (i.e., qualitative, quantitative, cross-sectional, longitudinal)
- A self-starter, motivated by an interest in developing the best possible data-driven solutions
- Ability to operate in a fast-paced, dynamic environment supporting a rapidly growing business

Let's talk compensation
Spotnana strives to offer fair, industry-competitive and equitable compensation. Our approach holistically assesses total compensation, including cash, company equity and comprehensive benefits. Our market-based compensation approach uses data from trusted third-party compensation sources to set salary ranges that are thoughtful and consistent with the role, industry, company size, and internal equity of our team. Each employee is paid within the minimum and maximum of their position's compensation range based on their skills, experience, qualifications, and other job-related specifications.

We care for the people who make everything possible - our benefits offerings include:
- Equity in the form of stock options, which provides partial ownership in the company so you can share in the success of the company as it grows
- Comprehensive benefit plans covering medical for self, spouse, children and parents, free doctor consultations, and an employee assistance program, effective on your hire date
- 18 privilege leave days and 12 casual/sick leave days per year, in addition to 12 company holidays, 4 company recharge/wellness days and an end-of-year company shutdown
- Up to 26 weeks of parental leave
- Monthly cell phone / internet stipend
- Meal allowance
- Wellness/gym reimbursement
- Relocation assistance for new joiners
- Employee retirement planning such as corporate NPS and EPF

We are committed to fostering a diverse, inclusive environment and to encouraging these values in everyone on our team. We provide an environment of mutual respect where opportunities are available without regard to race, color, religion, sex, pregnancy (including childbirth, lactation and related medical conditions), national origin, age, physical and mental disability, marital status, sexual orientation, gender identity, gender expression, genetic information (including characteristics and testing), military and veteran status, and any other characteristic protected by applicable law. We believe that diversity and inclusion for people from all walks of life is key to our success as a company.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through tech and analytics.

Do You Dream Big? We Need You.

Job Description
Job Title: Data Scientist – GBS Commercial
Location: Bangalore
Reporting to: Senior Manager – GBS Commercial

Purpose of the role
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical, with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.

Key tasks & accountabilities
- Identify valuable data sources and automate collection processes
- Undertake preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling (see the sketch after this posting)
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams

Qualifications, Experience, Skills
- Level of educational attainment required: BSc/BA in Computer Science, Engineering or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
- Previous work experience required: proven experience as a Data Scientist or Data Analyst; experience in data mining; understanding of machine learning and operations research
- Technical skills required: knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset; experience using business intelligence tools (e.g., Power BI) and data frameworks (e.g., Hadoop)
- Analytical mind and business acumen
- Strong math skills (e.g., statistics, algebra)

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
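As a small illustration of the "ensemble modeling" bullet, here is a hedged Scala sketch using Spark MLlib: a random forest is itself an ensemble of decision trees. The dataset path and column names (recency, frequency, monetary, label) are invented for illustration.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

// Sketch: train a tree ensemble on a hypothetical churn dataset.
object ChurnModel extends App {
  val spark = SparkSession.builder().appName("churn-model").getOrCreate()

  val df = spark.read.option("header", "true").option("inferSchema", "true")
    .csv("/data/churn.csv") // assumed: numeric features plus a binary "label"

  val assembler = new VectorAssembler()
    .setInputCols(Array("recency", "frequency", "monetary")) // hypothetical features
    .setOutputCol("features")

  val rf = new RandomForestClassifier()
    .setLabelCol("label")
    .setFeaturesCol("features")
    .setNumTrees(100) // the ensemble: 100 trees voting

  val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
  val model = new Pipeline().setStages(Array(assembler, rf)).fit(train)

  // Quick sanity check on held-out data.
  model.transform(test).select("label", "prediction").show(5)
}
```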

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
At Amazon, we strive to be the most innovative and customer-centric company on the planet. Come work with us to develop innovative products, tools and research-driven solutions in a fast-paced environment by collaborating with smart and passionate leaders, program managers and software developers. This role is based out of our Hyderabad corporate office and is for a passionate, dynamic, analytical, innovative, hands-on, and customer-obsessed Business Analyst. Your team will be comprised of Business Analysts, Data Engineers, and Business Intelligence Engineers based in Hyderabad, Europe and the US.

Key job responsibilities
The ideal candidate will have experience working with large datasets and distributed computing technologies. The candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. They should be an expert in data modeling, ETL design and business intelligence tools, with hands-on knowledge of columnar databases such as Redshift and other related AWS technologies. They passionately partner with customers to identify strategic opportunities in the field of data analysis and engineering. They should be a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail), and enjoy working in a fast-paced team that continuously learns and evolves on a day-to-day basis.

A day in the life
This role primarily focuses on deep-dives, creating dashboards for the business, and working with different teams to develop and track metrics and bridges.
- Design, develop and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs
- In-depth research of drivers of the Localization business
- Analyze key metrics to uncover trends and root causes of issues
- Suggest and build new metrics and analyses that enable a better perspective on the business
- Capture the right metrics to influence stakeholders and measure success
- Develop domain expertise and apply it to operational problems to find solutions
- Work across teams with different stakeholders to prioritize and deliver data and reporting
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Maintain BI architecture including our AWS account, database and various analytics tools

Basic Qualifications
- SQL mastery is a must
- Some scripting knowledge (Python, R, Scala)
- Stakeholder management
- Dashboarding (Excel, QuickSight, Power BI)
- Data analysis and statistics
- KPI design

Preferred Qualifications
- Power BI and Power Pivot in Excel
- AWS fundamentals (IAM, S3, ..)
- Python
- Apache Spark / Scala

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2942382

Posted 1 week ago

Apply

4.0 - 7.0 years

14 - 18 Lacs

Hyderabad

Hybrid


We are seeking a skilled Java/Scala Developer to join our team and contribute to projects involving large-scale data processing and modern application development. You will be involved in building and maintaining robust applications, particularly within the healthcare and financial services domains. This role requires a strong foundation in both Java and Scala, with experience in big data technologies and cloud platforms.

Responsibilities:
- Design, develop, and unit test high-quality Java and Scala code
- Develop and maintain applications utilizing Java, Scala, and related frameworks
- Work with big data technologies like Spark and Kafka
- Implement data processing pipelines using technologies like SSIS and Snowflake
- Utilize Splunk logs for logging and monitoring
- Deploy and manage applications on AWS cloud infrastructure
- Collaborate with cross-functional teams to deliver innovative solutions
- Participate in code reviews and contribute to improving our development processes
- Contribute to the development of shared services within a microservices architecture

Preferred Qualifications:
- Bachelor's degree in Computer Science or a related field
- Prior experience in developing and maintaining scalable backend solutions
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
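For the "Spark and Kafka" responsibility above, here is a minimal Scala sketch that publishes a processed record to a Kafka topic using the plain kafka-clients API. The broker address, topic name and payload are hypothetical.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Sketch: publish processed records to a Kafka topic from Scala.
object ClaimsPublisher extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumed broker
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)
  try {
    val record = new ProducerRecord[String, String](
      "claims-processed",            // assumed topic
      "claim-42",                    // key: hypothetical claim id
      """{"status":"APPROVED"}"""    // value: hypothetical payload
    )
    producer.send(record).get() // block for the broker ack in this tiny demo
  } finally {
    producer.close()
  }
}
```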

Posted 1 week ago

Apply

7.0 - 14.0 years

10 - 15 Lacs

Bengaluru

Work from Office


Strategy
- Analyse business problems and help to arrive at technically advanced solutions
- Proven ability to think out-of-the-box, fostering innovation and automation
- Proven ability to establish a strong team-player approach to problem solving
- Strong foundational knowledge of algorithms, data structures, OOP concepts and frameworks
- Curious learner, willing to learn and adapt to new technologies and frameworks
- Empowered mindset with the ability to ask questions and seek clarifications
- Excellent communication skills that enable seamless interactions with colleagues globally
- Strong technical skills, with exposure to coding in any next-gen tech
- Awareness of Agile methodologies
- Good technical skills, with exposure to:
  - object-oriented programming, preferably Java
  - modern technologies like microservices and UI frameworks (Angular, React)
  - applied maths and algorithms
  - AI/NLP/machine learning algorithms

Business
Trade | Risk | Money Laundering

People & Talent
Lead a team of Developers/Senior Developers and guide them in activities such as development, testing, testing support, implementation and post-implementation support.

Risk Management
Pro-actively manage risk and keep stakeholders informed.

Key Responsibilities

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
- Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.

Key stakeholders
- Trade AML POC
- Business Trade Technology
- Other TTO stakeholders

Other Responsibilities
- Adherence to Risk Data Quality Management requirements
- Risk and audit: continuous management of Trade Application System risk; proactively identify issues and actions; monitor and remediate issues and actions from audits
- Awareness of regulatory requirements and ensuring these are catered for in the platform design
- As part of the Build & Maintenance model, support Production as and when required
- Embed "Here for good" and the Group's brand and values in the team; perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures; multiple functions (double hats)

Skills and Experience
- Microservices (OCP, Kubernetes)
- Hadoop, Spark, Scala
- Elastic
- Trade Risk / AML
- Azure DevOps
- Traditional ETL pipelines and/or analytics pipelines

Qualifications
- Training: Machine Learning/AI experience (optional)
- Certifications: Quantexa
- Languages/Platforms: AWS EKS, Azure AKS, Angular, Microservices (OCP, Kubernetes), Hadoop, Spark, Scala, Elastic

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum
- Flexible working options based around home and office locations, with flexible working patterns
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies - everyone feels respected and can realise their full potential

www.sc.com/careers
30152

Posted 1 week ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Noida

Work from Office


Skill: PySpark / Databricks Lead who can manage deliverables from offshore and ensure adequate design/technical support is provided.
- PySpark, Python/Scala, Databricks
- Understanding of streaming pipelines, Docker, Kubernetes
- Problem-solving and technical troubleshooting skills
- Scala (good to have)

Mandatory Competencies
- Python - Python
- Big Data - PySpark
- Beh - Communication
- Data Science - Databricks
- Big Data - Scala

At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role: Data Engineer
Experience Preferred: 5+ Years
Location: Pune

Job Description:
- A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering
- 5+ years of relevant experience as a data engineer in Big Data is required
- Strong knowledge of programming languages (Python / Scala) and Big Data technologies (Spark, Databricks or equivalent) is required
- Strong experience in executing complex data analysis and running complex SQL/Spark queries
- Strong experience in building complex data transformations in SQL/Spark
- Strong knowledge of database technologies is required
- Strong knowledge of Azure Cloud is advantageous
- Good understanding of and experience with Agile methodologies and delivery
- Strong communication skills with the ability to build partnerships with stakeholders
- Strong analytical, data management and problem-solving skills
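As an example of the "complex data transformations in SQL/Spark" this posting asks for, here is a hedged Scala sketch that deduplicates to the latest record per key using a window function. The input path and the key and timestamp columns are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Sketch: keep only the most recent row per customer via a window function.
object LatestPerCustomer extends App {
  val spark = SparkSession.builder().appName("latest-per-customer").getOrCreate()

  val orders = spark.read.parquet("/data/orders") // assumed input

  // Rank rows within each customer partition, newest first.
  val byCustomer = Window.partitionBy("customer_id").orderBy(col("updated_at").desc)

  val latest = orders
    .withColumn("rn", row_number().over(byCustomer))
    .filter(col("rn") === 1)
    .drop("rn")

  latest.write.mode("overwrite").parquet("/data/orders_latest")
}
```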

Posted 1 week ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
Engage and collaborate with customers to understand business requirements and use cases, and translate them into detailed technical specifications.
Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (a Scala sketch follows this listing).
Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
Work with other members of the project team to support delivery of additional project components (API interfaces).
Evaluate the performance and applicability of multiple tools against customer requirements.
Work within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints.
Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements
Proven experience working as a data engineer.
Highly proficient in using the Spark framework (Python and/or Scala).
Extensive knowledge of data warehousing concepts, strategies, and methodologies.
Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
Experience in designing and hands-on development of cloud-based analytics solutions.
Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
Experience designing and building data pipelines using API ingestion and streaming ingestion methods.
Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
Thorough understanding of Azure Cloud infrastructure offerings.
Strong experience in common data warehouse modeling principles, including Kimball.
Working knowledge of Python is desirable.
Experience developing security models.
Databricks and Azure Big Data Architecture certification would be a plus.

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 2-10 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering, Master Degree - Computer Applications, Bachelor of Technology
Required Skills: Data Engineering, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}
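As a rough sketch of the Databricks pipeline work listed above: read raw files from ADLS Gen2, clean them, and append to a Delta table. The storage account, container names, and columns are illustrative assumptions, and the sketch presumes a runtime with Delta Lake available (as on Azure Databricks).

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("AdlsToDelta").getOrCreate()

// Hypothetical landing zone in Azure Data Lake Storage Gen2.
val raw = spark.read
  .option("header", "true")
  .csv("abfss://landing@examplestore.dfs.core.windows.net/orders/")

val cleaned = raw
  .filter(col("order_id").isNotNull)                        // drop incomplete rows
  .withColumn("order_ts", to_timestamp(col("order_ts")))    // normalize types
  .withColumn("amount", col("amount").cast("decimal(18,2)"))
  .withColumn("order_date", to_date(col("order_ts")))       // partition key

cleaned.write
  .format("delta")
  .mode("append")
  .partitionBy("order_date")
  .save("abfss://curated@examplestore.dfs.core.windows.net/orders_delta/")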

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources (a Scala sketch follows this listing).
Implement data ingestion, processing, and storage solutions on the GCP cloud platform, leveraging its managed services.
Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly GCP, and proficiency in GCP services.
Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: GCP, PySpark, Spark
Preferred Skill Sets: GCP, PySpark, Spark
Years of Experience Required: 4-8 years
Education Qualification: B.Tech / M.Tech / MBA / MCA
Required Skills: PySpark, Spark SQL
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
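A minimal Scala Spark sketch of a GCP-flavoured pipeline like the one described above: aggregate JSON events from Cloud Storage and publish the result to BigQuery. It assumes the spark-bigquery connector is on the classpath (as it is on Dataproc by default); the bucket, dataset, and field names are made up for illustration.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("GcsToBigQuery").getOrCreate()

// Hypothetical landing zone in Cloud Storage.
val events = spark.read.json("gs://example-landing/events/2024/")

val daily = events
  .withColumn("event_date", to_date(col("event_ts")))
  .groupBy("event_date", "event_type")
  .agg(count("*").as("event_count"))

// Requires the spark-bigquery connector on the classpath.
daily.write
  .format("bigquery")
  .option("table", "analytics.daily_event_counts")
  .option("temporaryGcsBucket", "example-tmp-bucket")
  .mode("overwrite")
  .save()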

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements (see the schema sketch after this listing).
Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure
Preferred Skill Sets: Spark, PySpark, Azure
Years of Experience Required: 4-8 years
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Required Skills: PySpark, Python (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
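One concrete slice of the "data models and schemas" responsibility above is declaring explicit schemas instead of relying on inference. A small Scala sketch, with a hypothetical orders feed; the column names and path are illustrative assumptions.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("ExplicitSchema").getOrCreate()

// Declaring the schema up front skips a costly inference pass and
// documents the contract with downstream consumers.
val orderSchema = StructType(Seq(
  StructField("order_id",    LongType,           nullable = false),
  StructField("customer_id", LongType,           nullable = false),
  StructField("status",      StringType,         nullable = true),
  StructField("amount",      DecimalType(18, 2), nullable = true),
  StructField("created_at",  TimestampType,      nullable = true)
))

val orders = spark.read
  .schema(orderSchema)
  .option("header", "true")
  .option("mode", "FAILFAST")   // fail loudly if upstream changes a column type
  .csv("abfss://landing@examplestore.dfs.core.windows.net/orders/")

orders.printSchema()

With FAILFAST, a malformed row aborts the read instead of silently becoming nulls, which is usually what you want at the boundary of a warehouse.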

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
Implement data ingestion, processing, and storage solutions on the GCP cloud platform, leveraging its managed services.
Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance (see the masking sketch after this listing).

Requirements
Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly GCP, and proficiency in GCP services.
Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: GCP, PySpark, Spark
Preferred Skill Sets: GCP, PySpark, Spark
Years of Experience Required: 4-8 years
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study Required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 12 more}
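On the "data security and compliance" responsibility: a common pattern is hashing direct identifiers and coarsening quasi-identifiers before data leaves a restricted zone. A minimal Scala sketch; the column names, bucket paths, and salt handling are illustrative assumptions only.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("PiiMasking").getOrCreate()

val customers = spark.read.parquet("gs://example-curated/customers/")

// Hash direct identifiers and truncate quasi-identifiers; in production
// the salt would come from a secret manager, not an environment variable.
val salt = sys.env.getOrElse("PII_SALT", "dev-only-salt")
val masked = customers
  .withColumn("email_hash", sha2(concat(col("email"), lit(salt)), 256))
  .withColumn("postcode_prefix", substring(col("postcode"), 1, 3))
  .drop("email", "postcode", "phone_number")

masked.write.mode("overwrite").parquet("gs://example-analytics/customers_masked/")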

Posted 1 week ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site

Linkedin logo

Company Profile
Oceaneering is a global provider of engineered services and products, primarily to the offshore energy industry. We develop products and services for use throughout the lifecycle of an offshore oilfield, from drilling to decommissioning. We operate the world's premier fleet of work class ROVs. Additionally, we are a leader in offshore oilfield maintenance services, umbilicals, subsea hardware, and tooling. We also use applied technology expertise to serve the defense, entertainment, material handling, aerospace, science, and renewable energy industries.

Since 2003, Oceaneering's India Center has been an integral part of operations for Oceaneering's robust product and service offerings across the globe. This center caters to diverse business needs, from oil and gas field infrastructure and subsea robotics to automated material handling and logistics. Our multidisciplinary team offers a wide spectrum of solutions, encompassing Subsea Engineering, Robotics, Automation, Control Systems, Software Development, Asset Integrity Management, Inspection, ROV operations, Field Network Management, Graphics Design & Animation, and more. In addition to these technical functions, Oceaneering India Center plays host to several crucial business functions, including Finance, Supply Chain Management (SCM), Information Technology (IT), Human Resources (HR), and Health, Safety & Environment (HSE). Our world-class infrastructure in India includes modern offices, industry-leading tools and software, equipped labs, and beautiful campuses aligned with the future way of work. Oceaneering in India, as well as globally, has a great work culture that is flexible, transparent, and collaborative, with great team synergy. At Oceaneering India Center, we take pride in "Solving the Unsolvable" by leveraging the diverse expertise within our team. Join us in shaping the future of technology and engineering solutions on a global scale.

Position Summary
Assist with building, maintaining, and optimizing data pipelines, ensuring data flows efficiently across systems. You will work closely with senior data engineers and data analysts to support data integration, ETL (Extract, Transform, Load) processes, and overall data infrastructure.

Duties and Responsibilities
Assist in designing, building, and maintaining scalable data pipelines to move data from various sources to the data warehouse or data lake.
Help integrate data from various internal and external sources, including databases, APIs, and flat files, into centralized systems.
Assist in data migration projects by writing scripts to move data between systems while ensuring data quality and integrity.
Collaborate with the data quality team to ensure that data is accurate, consistent, and reliable.
Implement basic data validation rules and participate in data quality checks to identify and fix data anomalies or errors (see the validation sketch after this listing).
Assist in the management of databases, including tasks such as creating tables, writing SQL queries, and optimizing database performance.
Support efforts to ensure efficient data storage, indexing, and retrieval for analytics and reporting purposes.
Work closely with data analysts, business intelligence teams, and other stakeholders to understand data requirements and support their data needs.
Provide data extracts, reports, and documentation as requested by business users and analysts.
Assist in creating technical documentation for data models, pipelines, and integration processes.

Supervisory Responsibilities
This position has/does not have direct supervisory responsibilities.

Reporting Relationship
Sr. Manager, Data Estate – Business Intelligence

Qualifications
Bachelor's degree in Computer Science, Information Systems, Engineering, Mathematics, or a related field.
Relevant coursework or projects involving data management, databases, or data engineering is highly desirable.

Knowledge, Skills, Abilities, and Other Characteristics
Basic understanding of data structures, algorithms, and database management systems (SQL and NoSQL).
Familiarity with SQL for querying databases and manipulating data.
Some experience with languages like Python, Java, or Scala for data processing tasks.
Knowledge of data warehousing concepts and ETL processes is a plus.
Exposure to cloud platforms (AWS, Azure, Google Cloud) or data tools (e.g., Apache Spark, Hadoop) is an advantage but not required.
Strong problem-solving skills and the ability to troubleshoot data-related issues.
Detail-oriented, with a focus on data accuracy and quality.

Preferred Qualifications
Internship or hands-on project experience in data engineering or a related field is a plus.
Experience working with data integration tools, cloud platforms, or big data technologies will be an added advantage.
Familiarity with version control tools such as Git is beneficial.

Closing Statement
In addition, we make a priority of providing learning and development opportunities to enable employees to achieve their potential and take charge of their future. As well as developing employees in a specific role, we are committed to lifelong learning and ongoing education, including developing people skills and identifying future supervisors and managers. Every month, hundreds of employees are provided training, including HSE awareness, apprenticeships, entry and advanced level technical courses, management development seminars, and leadership and supervisory training. We have a strong ethos of internal promotion. We can offer long-term employment and career advancement across countries and continents. Working at Oceaneering means that if you have the ability, drive, and ambition to take charge of your future, you will be supported to do so, and the possibilities are endless.

Equal Opportunity/Inclusion
Oceaneering's policy is to provide equal employment opportunity to all applicants.
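The "basic data validation rules" mentioned in the duties might look like the following Scala Spark sketch: three checks (missing keys, out-of-range quantities, duplicate business keys) that fail the job loudly when violated. The table path and column names are hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("BasicValidation").getOrCreate()

val shipments = spark.read.parquet("/warehouse/staging/shipments")

// Rule 1: key columns must be present.
val nullKeys = shipments
  .filter(col("shipment_id").isNull || col("site_id").isNull).count()

// Rule 2: quantities must fall in a plausible range.
val badQty = shipments
  .filter(col("quantity") < 0 || col("quantity") > 100000).count()

// Rule 3: no duplicate business keys.
val dupes = shipments
  .groupBy("shipment_id").count().filter(col("count") > 1).count()

// require throws IllegalArgumentException, aborting the job with a clear message.
require(nullKeys == 0, s"$nullKeys rows with missing keys")
require(badQty == 0, s"$badQty rows with out-of-range quantity")
require(dupes == 0, s"$dupes duplicated shipment ids")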

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

Remote

Linkedin logo

🏢 Company: Natlov Technologies Pvt Ltd
🕒 Experience Required: 1–2 Years
🌐 Location: Remote (India-based candidates preferred)

🧠 About the Role:
We are seeking passionate Data Engineers with hands-on experience in building scalable, distributed data systems and high-volume transaction applications. Join us to work with modern Big Data technologies and cloud platforms to architect, stream, and analyze data efficiently.

🛠️ What We're Looking For (Experience: 1–2 Years):
🔹 Strong hands-on programming experience in Scala, Python, and other object-oriented languages
🔹 Experience in building distributed/scalable systems and high-volume transaction applications
🔹 Solid understanding of Big Data technologies (a minimal Scala streaming sketch follows this listing):
• Apache Spark (Structured & Real-Time Streaming)
• Apache Kafka
• Delta Lake
🔹 Experience with ETL workflows using MapReduce, Spark, and Hadoop
🔹 Proficiency in SQL querying and SQL Server Management Studio (SSMS)
🔹 Experience with Snowflake or Databricks
🔹 Dashboarding and reporting using Power BI
🔹 Familiarity with Kafka, Zookeeper, and YARN for ingestion and orchestration
🔹 Strong analytical and problem-solving skills
🔹 Energetic, motivated, and eager to learn and grow in a collaborative team environment

📍 Work Mode: Remote
📩 How to Apply: Send your resume to techhr@natlov.com

Be a part of a passionate and forward-thinking team at Natlov Technologies Pvt Ltd, where we're redefining how data is architected, streamed, analyzed, and delivered. Let's build the future of data together! 💼

#DataEngineer #BigData #ApacheSpark #Kafka #DeltaLake #SQL #PowerBI #Databricks #Snowflake #ETL #Python #Scala #SSMS #HiringNow #NatlovTechnologies #1to2YearsExperience #TechJobs #CareerOpportunity #RemoteJobs
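A minimal Scala sketch of the Spark Structured Streaming + Kafka + Delta Lake stack this role names. It assumes the spark-sql-kafka and Delta Lake artifacts are on the classpath; the broker address, topic, and JSON fields are placeholders.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder.appName("KafkaToDelta").getOrCreate()

// Read a Kafka topic as a structured stream.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "orders")
  .load()

// Kafka delivers bytes; decode the value and pull out fields from the JSON payload.
val parsed = stream
  .selectExpr("CAST(value AS STRING) AS json", "timestamp")
  .select(
    get_json_object(col("json"), "$.order_id").as("order_id"),
    get_json_object(col("json"), "$.amount").cast("double").as("amount"),
    col("timestamp"))

// Sink to a Delta table; the checkpoint gives end-to-end exactly-once delivery.
val query = parsed.writeStream
  .format("delta")
  .option("checkpointLocation", "/delta/checkpoints/orders")
  .outputMode("append")
  .start("/delta/orders")

query.awaitTermination()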

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 14 Lacs

Pune, Chennai

Hybrid

Naukri logo

Senior Data Engineer
Experience: 6-8 years (relevant experience: 5+ years)
Location: Pune/Chennai
Notice Period: Immediate joiners
Mandatory Skills: Spark, Python, SQL

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience as a Data Engineer or in a similar role.
Proficiency in Spark, Scala, Hive, Python, and Snowflake (see the Scala sketch below).
Strong understanding of ETL processes and data warehousing concepts.
Experience with SQL and NoSQL databases.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work independently and as part of a team.

Preferred Qualifications:
Experience with cloud services (e.g., AWS, GCP, Azure, Oracle Cloud).
Knowledge of Docker and Kubernetes.

Interested candidates can share their updated resume to shri.lakshmi@cielhr.com
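As one example of the Spark/Hive/warehouse work implied by the skill list, here is a hedged Scala sketch that aggregates a Hive fact table into a partitioned summary table. The database, table, and column names are invented for illustration.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder
  .appName("DailySalesMart")
  .enableHiveSupport()   // read and write tables registered in the Hive metastore
  .getOrCreate()

// Aggregate the hypothetical fact table into a daily, per-store summary.
val daily = spark.table("warehouse.sales")
  .groupBy("sale_date", "store_id")
  .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))

// Publish as a partitioned managed table for cheap date-bounded queries.
daily.write
  .mode("overwrite")
  .partitionBy("sale_date")
  .saveAsTable("mart.daily_sales")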

Posted 1 week ago

Apply


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description
Amazon Finance Operations Global Data Analytics (GDA) science team seeks a Sr. Data Scientist with the technical expertise and business intuition to invent the future of Accounts Payable at Amazon. As a key member of the science team, the Data Scientist will own high-visibility analyses, methodology, and algorithms in the Procure-to-Pay lifecycle to drive free cash flow improvements for Amazon Finance Operations. This is a unique opportunity in a growing data science and economics team with a charter to optimize operations and planning with complex trade-offs between customer experience, cash flow, and operational efficiencies in our payment processes.

Key Job Responsibilities
The Sr. Data Scientist's responsibilities include, but are not limited to, the following:
Manage relationships with business and operational stakeholders and product managers to innovate on behalf of customers, develop novel applications of data science methodologies, and partner with engineers and scientists to design, develop, and scale machine learning models.
Define the vision for data science in the accounts payable space in partnership with process and technology leaders.
Extract and analyze large amounts of data related to suppliers and associated business functions.
Adapt statistical and machine learning methodologies for Finance Operations by developing and testing models, running computational experiments, and fine-tuning model parameters.
Review and recommend improvements to science models and architecture as they relate to accounts payable processes and tools.
Use computational methods to identify relationships between data and business outcomes, define outliers and anomalies, and justify those outcomes to business customers.
Communicate verbally and in writing to business customers with various levels of technical knowledge, educate stakeholders on our research, data science, and ML practice, and deliver actionable insights and recommendations.
Serve as a point of contact for questions from business and operations leaders.
Develop code to analyze data (SQL, PySpark, Scala, etc.) and build statistical and machine learning models and algorithms (Python, R, Scala, etc.); a minimal sketch follows this listing.

A Day in the Life
As a successful data scientist in GDA's Science team, you will dive deep on data from Amazon's payment practices and customer support functions, extract new assets, drive investigations and algorithm development, and interface with technical and non-technical customers. You will leverage your data science expertise and communication skills to pivot between delivering science solutions, translating knowledge of finance and operational processes into models, and communicating insights and recommendations to audiences of varying levels of technical sophistication in support of specific business questions, root cause analysis, planning, and innovation for the future. The role works in a genuinely global environment across various functional teams, with daily interaction across North America and Europe.

About the Team
Global Data Analytics (GDA) supports decisions in AR and AP. In close cooperation with our stakeholders, we agree and build uniform metrics; use data from a 'single source of truth'; provide automated, self-service, standard reporting; and build predictive analytics. Our topmost ambition is to actively contribute to the improvement of Amazon's Free Cash Flow by value-adding analytics. Our success is built on users' trust in our data and the reliability of our analytics tools. GDA's data scientists and economists further that mission with rigorous statistical, econometric, and ML models to complement reporting and analysis developed by GDA's analytical, BI, and Finance professionals.

Basic Qualifications
5+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), or statistical/mathematical software (e.g., R, SAS, Matlab).
4+ years of experience as a data scientist.
5+ years of experience in a data scientist or similar role involving data extraction, analysis, statistical modeling, and communication.

Preferred Qualifications
3+ years of experience with data visualization using AWS QuickSight, Tableau, R Shiny, etc.
Experience managing data pipelines.
Experience as a leader and mentor on a data science team.
Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI HYD 13 SEZ
Job ID: A2942802
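To make the modeling stack concrete, here is a minimal Spark MLlib sketch in Scala for the kind of late-payment classification one could imagine in an accounts-payable setting. Every feature, path, and label here is a hypothetical illustration, not Amazon's actual methodology.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("LatePaymentModel").getOrCreate()

// Hypothetical invoice-level features; "paid_late" is assumed to be a 0.0/1.0 double label.
val invoices = spark.read.parquet("/data/ap/invoice_features")

val indexer = new StringIndexer()
  .setInputCol("payment_terms").setOutputCol("terms_idx").setHandleInvalid("keep")

val assembler = new VectorAssembler()
  .setInputCols(Array("invoice_amount", "supplier_tenure_days", "terms_idx"))
  .setOutputCol("features")

val lr = new LogisticRegression()
  .setLabelCol("paid_late").setFeaturesCol("features")

val Array(train, test) = invoices.randomSplit(Array(0.8, 0.2), seed = 42)
val model = new Pipeline().setStages(Array(indexer, assembler, lr)).fit(train)

// Default metric is area under the ROC curve.
val auc = new BinaryClassificationEvaluator()
  .setLabelCol("paid_late").evaluate(model.transform(test))
println(s"Test AUC: $auc")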

Posted 1 week ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies