
4562 PySpark Jobs - Page 17

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Description

OUR IMPACT

Platform Solutions, Goldman Sachs delivers a broad range of financial services across investment banking, securities, investment management and consumer banking to a large and diversified client base that includes corporations, financial institutions, governments, and individuals. Clients embed innovative financial products and solutions that create customer-centered experiences, powered by Goldman Sachs. The businesses of Platform Solutions share a developer-centric mindset and cloud-native platforms. We embed financial products, including credit cards, installment financing and high yield savings accounts, into the ecosystems of major brands to serve millions of loyal customers. We make it easy to offer a range of financial products powered by an API-first platform, with the backing of Goldman Sachs' 150+ years of financial expertise. We offer a customized deployment approach while providing a modern, agile technology stack, all supported by our long history of financial expertise, risk management and regulatory knowledge.

In Platform Solutions (PS), we power clients with innovative and customer-centered financial products. We bring the best qualities of a technology player and combine them with the best attributes of a large bank. PS is comprised of four main businesses, underpinned by engineering, operations and risk management:

Transaction Banking, a cash management and payments platform for clients building a corporate treasury system
Enterprise Partnerships, consumer financial products that companies embed directly within their ecosystems to better serve their end customers
ETF Accelerator, a platform for clients to launch, list and manage exchange-traded funds

Join us on our journey to deliver financial products and platforms that prioritize the customer and developer experience.

Your Impact

This position will play a key role on the First Line Risk and Control team, supporting Consumer Monitoring & Testing and driving the implementation of horizontal Consumer risk programs. This individual will be responsible for executing risk-based testing and liaising with product, operations, compliance, and legal teams to ensure regulatory adherence. The role will also provide the opportunity to drive development and enhancement of risk and control programs:

Execute testing and monitoring of regulatory, policy and process compliance
Gather and synthesize data to determine root causes and trends related to testing failures
Propose effective and efficient methods to enhance testing and sampling strategies (including automation) to ensure the most effective risk detection, analyses and control solutions
Proactively identify potential business risks, process deficiencies and improvement opportunities, and recommend additional controls and corrective actions to enhance the efficiency and effectiveness of risk mitigation processes
Maintain effective communication with stakeholders and support teams in remediation of testing errors; assist with implementation of corrective actions related to testing fails and non-compliance with policies and procedures
Identify continuous improvement opportunities to meet changing requirements, driving maximum visibility to the executive audience
Work closely with enterprise risk teams to ensure business line risks are shared and rolled up to firm-wide risk summaries

Your Skills

2-4 years of testing, audit, or compliance experience in consumer financial services
Bachelor's degree or equivalent military experience
Knowledge of applicable U.S. federal and state consumer lending laws and regulations, as well as industry association standards, including, among others, the Truth in Lending Act (Reg Z), Equal Credit Opportunity Act (Reg B), Fair Credit Reporting Act (Reg V), and UDAAP
Understanding of test automation frameworks, such as data-driven and hybrid frameworks
Knowledge of testing concepts, methodologies, and technologies
Genuine excitement and passion for leading root cause analysis, troubleshooting technical process failures and implementing fixes to operationalize a process
Analytical, critical thinking and problem-solving skills
Highly motivated self-starter with strong organizational skills, attention to detail, and the ability to remain organized in a fast-paced environment
Interpersonal and relationship management skills
Integrity, ethical standards, and sound judgment; ability to exercise discretion with respect to sensitive information
Ability to summarize observations and present them in a clear, concise manner to peers, managers and senior Consumer Compliance management
Ability to quickly grasp complex concepts, including global business and regulatory matters
Confidence in expressing a point of view with management
Plus: CPA, audit experience, CRCM; proficiency in Aqua Data Studio, Snowflake, Splunk, Excel macros, Tableau, Hadoop/PySpark/Spark/Python/R

About Goldman Sachs

At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.

We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html
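The posting lists PySpark among the "plus" skills for enhancing sampling strategies. Purely as an illustrative sketch, not part of the job description (the table, columns, and risk tiers are invented), a risk-weighted sample for compliance testing could be drawn like this:

```python
# Hypothetical sketch: stratified sampling of transactions for compliance testing.
# Table path, columns, and risk tiers are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compliance-sampling").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/transactions/")  # placeholder path

# Sample 1% of low-risk and 10% of high-risk records for manual review,
# concentrating reviewer effort where detection value is highest.
sample = txns.sampleBy("risk_tier", fractions={"low": 0.01, "high": 0.10}, seed=42)
sample.write.mode("overwrite").parquet("s3://example-bucket/testing-samples/")
```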

Posted 3 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Who We Are

With Citi's Analytics & Information Management (AIM) group, you will do meaningful work from Day 1. Our collaborative and respectful culture lets people grow and make a difference in one of the world's leading financial services organizations. The purpose of the group is to use Citi's data assets to analyze information and create actionable intelligence for our business leaders. We value what makes you unique so that you have the opportunity to shine. You also get the opportunity to work with the best minds and top leaders in the analytics space.

What The Role Is

The role of Officer – Data Management/Information Analyst will be part of the AIM team based out of Bengaluru, India, supporting the Global Workforce Optimization (GWFO) unit. The GWFO team supports capacity planning across the organization. Its primary responsibility is to forecast future demand (inbound/outbound call volume, back-office process volume, etc.) and the capacity required to fulfill that demand. This includes forecasting short-term demand (daily/hourly) and scheduling agents accordingly. The GWFO team also collaborates with multiple stakeholders to develop optimal hiring plans that ensure adequate capacity while optimizing the operational budget.

In this role, you will work alongside a highly talented team of analysts to build data solutions that track key business metrics and support workforce optimization activities. You will be responsible for understanding and mapping out the data landscape for current and new businesses onboarded by GWFO, and for designing the data stores and pipelines needed for capacity planning, reporting and analytics, as well as real-time monitoring. You will work very closely with GWFO's technology partners to get these solutions implemented in a compliant environment.

Who You Are

Data Driven. A proven track record of enabling decision making and problem solving with data. Conceptual thinking skills must be complemented by a strong quantitative orientation and data-driven approach.
Excellent Problem Solver. You are a critical thinker, able to ask the right questions, make sense of a situation and come up with intelligent solutions.
Strong Team Player. You build trusted relationships with your team members. You are ready to offer unconditional assistance, will listen, share knowledge, and are always ready to provide support as needed.
Strong Communicator. You can communicate verbally and in writing with clarity, and can structure and present your work to your partners and leadership.
Clear Results Orientation. You display a keen focus on achieving both short- and long-term goals and have experience driving and executing an agenda in a demanding and fast-paced environment, with an eye on risks and controls.
Innovative. You are always challenging yourself and your team to find better and faster ways of doing things.

What You Do

Data Exploration. Understand underlying data sources by delving into multiple platforms scattered across the organization. You do what it takes to gather information by connecting with people across business teams and technology.
Build Data Assets. You have a strong data design background and are capable of developing and building multi-dimensional data assets and pipelines that capture rich information about various lines of business.
Process & Controls Orientation. You develop strong processes and indestructible controls to address risk, and seek to propagate that culture to become a core value of your team.
Dashboarding and Visualization. You develop insightful, visually compelling and engaging dashboards that support decision making and drive adoption.
Flawless Execution. You manage and sequence delivery of reporting and data needs by actively managing requests against available bandwidth and identifying opportunities for improved productivity.
Be an Enabler. You support your team and help them accomplish their goals with empathy. You act as a facilitator, remove blockers, and create a positive atmosphere for them to be innovative and productive at work.

Qualifications

Must have 3+ years of work experience, largely in the data management/engineering space.
Must have expertise working with SQL.
Must have expertise working with PySpark/Python for data extraction and deep-dive activities.
Prior experience in an operations role is desirable.
Working experience with the MS Office package (Excel, Outlook, PowerPoint, etc., with VBA) and/or BI visualization tools like Tableau is a plus.

------------------------------------------------------

Job Family Group: Decision Management

------------------------------------------------------

Job Family: Data/Information Management

------------------------------------------------------

Time Type: Full time

------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
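To illustrate the PySpark "deep dive" skill the posting asks for (table and column names below are invented assumptions, not from the posting), an hourly call-volume rollup for capacity planning might look like:

```python
# Hypothetical sketch: aggregating hourly call volumes for capacity planning.
# Source table and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gwfo-call-volume").getOrCreate()

calls = spark.table("ops.inbound_calls")  # assumed metastore table

hourly = (
    calls
    .withColumn("hour", F.date_trunc("hour", F.col("call_start_ts")))
    .groupBy("hour", "line_of_business")
    .agg(F.count("*").alias("call_volume"),
         F.avg("handle_time_sec").alias("avg_handle_time_sec"))
)
hourly.write.mode("overwrite").saveAsTable("ops.hourly_call_volume")
```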

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Title: Data Engineer

Job Description

Hello, we're IG Group. We're a global, FTSE 250-listed company made up of a collection of progressive fintech brands in the world of online trading and investing. The best part? We've snapped up many awards for our top-class platforms, forward-thinking products, and incredible employee experiences.

Who We're Looking For

You're curious about things like the client experience, the rapid developments in tech, and the complex world of fintech regulation. You're also a confident, creative thinker with a knack for innovating. We know that you know every problem has a solution. Here, you can try new ideas and lead the way in creating inspiring experiences for our clients and everyone around you. We don't fit the corporate stereotype. If you want to work for a traditional, suit-and-tie corporate that just gives you a pay cheque at the end of the month, we might not be for you. But if you have that IG Group energy and you can stand behind what we believe in, let's raise the bar together.

About The Team

We are looking for a Data Engineer for our team in our Bangalore office. The role, as well as the projects you will participate in, is crucial for the entire IG. Data Engineering is responsible for collecting data from various sources and generating insights for our business stakeholders. As a Data Engineer you will be responsible for the delivery of our projects, participating in the whole project life cycle (development and delivery), applying Agile best practices and ensuring good engineering quality. You will work with other technical team members to build ingestion pipelines and a shared company-wide data platform in GCP, as well as supporting and evolving our wide range of services in the cloud. You will own the development and support of our applications, which also includes our out-of-hours support rota.

The Skills You'll Need

You will be someone who can demonstrate:
Good understanding of the IT development life cycle, with a focus on quality, continuous delivery and integration
3-5 years of experience in Python, data processing (pandas/PySpark), and SQL
Good experience with cloud, particularly GCP
Good communication skills, being able to communicate technical concepts to a non-technical audience
Proven experience working in Agile environments
Experience working on data-related projects, from data ingestion to analytics and reporting
Good understanding of Big Data and distributed compute frameworks such as Spark, for both batch and streaming workloads
Familiarity with Kafka and different data formats: Avro, Parquet, ORC, JSON

It Would Be Great If You Have Experience With
GitLab
Containerisation (Nomad or Kubernetes)

How You'll Grow

When you join IG Group, we want you to have more than a job – we want you to have a career. And you can. If you spot an opportunity, we want you to chase it. Stretch yourself, challenge your self-beliefs and go for the things you dream of. With internal and external learning opportunities and the tools to help you skyrocket to success, we'll support you all the way. And these opportunities truly are endless because we have some bold targets. We plan to expand our global presence, increase revenue growth, and ultimately deliver the world's best trading experience. We'd love to have you along for the ride.

The Perks

It really is more than a job. We'll recognise your talent and make sure that you can still have a life – at work, and outside of it.
Networks, committees, awards, sports and social clubs, mentorships, volunteering opportunities, extra time off… the list goes on.
Matched giving for your fundraising activity.
Flexible working hours and work-from-home opportunities.
Performance-related bonuses.
Insurance and medical plans.
Career-focused technical and leadership training.
Contribution to gym memberships and more.
A day off on your birthday.
Two days' volunteering leave per year.

Where You'll Work

We follow a hybrid working model; we reckon it's the best of both worlds. This model also feeds into our secret ingredients for innovation: diversity, flexibility, and close connection. Plus, you'll be welcomed into a diverse and inclusive workforce with a lot of creative energy. Ask our employees what their favourite thing is about working at IG, and you'll hear an echo of 'our culture'! That's because you can come to work as your authentic self. The things that make you, you – like your ethnicity, sexual orientation, faith, age, gender identity/expression or physical capacity – can bring a fresh perspective or new skill to our business. That's why we welcome people from various walks of life, and anyone who wants to help us realise our vision and strategy. So, if you're keen to connect with our values and lead the charge on innovation, you know what to do. Apply!

Number of openings: 1
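As a purely illustrative sketch of the batch ingestion work the posting describes (bucket paths and column names are assumptions, not from the posting), a raw-JSON-to-Parquet step on GCP might look like:

```python
# Hypothetical sketch: batch ingestion from raw JSON to partitioned Parquet on GCS.
# Bucket paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ig-batch-ingest").getOrCreate()

raw = spark.read.json("gs://example-raw-bucket/trades/2024-01-01/")

cleaned = (
    raw
    .dropDuplicates(["trade_id"])                        # basic de-duplication
    .withColumn("trade_date", F.to_date("executed_at"))  # derive partition column
)

# Partitioned Parquet keeps downstream analytics reads cheap and selective.
(cleaned.write
    .mode("overwrite")
    .partitionBy("trade_date")
    .parquet("gs://example-curated-bucket/trades/"))
```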

Posted 3 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Source: Naukri

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Data Engineer (AWS, Python, Spark, Kafka for ETL)!

Responsibilities

Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS.
Build data pipelines by building ETL (Extract-Transform-Load) processes.
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems.
Understand current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications
Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift.
Experience with Databricks is an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills
Master's degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering and Cloud certifications; Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
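The responsibilities center on Glue-based ETL. As a purely illustrative sketch (the database, table, and bucket names are invented, and the awsglue module is only available inside the Glue runtime), such a job might look like:

```python
# Hypothetical sketch of an AWS Glue PySpark job: catalog JSON in, curated Parquet out.
# Database, table, bucket names, and columns are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events registered in the Glue Data Catalog (assumed names).
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_json"
)

# Convert to a Spark DataFrame for standard transformations.
df = events.toDF().dropDuplicates(["order_id"]).filter("order_total >= 0")

# Write curated Parquet for downstream loading into Redshift (e.g. via COPY or Spectrum).
df.write.mode("overwrite").parquet("s3://example-curated/orders/")

job.commit()
```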

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

We are looking for an immediate joiner: an experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Responsibilities

Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
Write efficient SQL queries for data extraction, transformation, and analysis.
Implement and manage Kafka streams for real-time data processing.
Utilize scheduling tools to automate data workflows and processes.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Ensure data quality and integrity by implementing robust data validation processes.
Optimize existing data processes for performance and scalability.

Requirements

Experience with GCP.
Knowledge of data warehousing concepts and best practices.
Familiarity with machine learning and data analysis tools.
Understanding of data governance and compliance standards.

This job was posted by Arun Kumar K from krtrimaIQ Cognitive Solutions.
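As an illustration of the Kafka-plus-PySpark pipeline work described above (broker address, topic, schema, and paths are assumptions; the spark-sql-kafka package is assumed to be on the cluster), a minimal Structured Streaming consumer might look like:

```python
# Hypothetical sketch: consuming a Kafka topic with PySpark Structured Streaming.
# Broker, topic, schema, and output paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed stream as Parquet; checkpointing makes the query restartable.
query = (
    stream.writeStream.format("parquet")
    .option("path", "/data/events/")
    .option("checkpointLocation", "/chk/events/")
    .start()
)
query.awaitTermination()
```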

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

About Oportun

Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working At Oportun

Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Position Overview

As a Sr. Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software and data platforms in service of the engineering group's charter. Your mastery of a technical domain enables you to take business problems and solve them with technical solutions. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. This is a role where you will have the opportunity to take responsibility for leading the technology effort – from technical requirements gathering to final successful delivery of the product – for large initiatives (cross-functional, multi-month projects).

Responsibilities

Data Architecture and Design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures.

Data Pipeline Development and Optimization: Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability.

Database Management and Optimization: Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval.

Data Quality and Governance: Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets.

Mentorship and Leadership: Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code.

Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.

Performance Monitoring and Optimization: Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Requirements

You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends.
You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective solutions.
You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility.
You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability.
You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team.
You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and drive adoption of new practices within your team.
You take ownership of customer issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems.
You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems.
You independently drive and lead multiple features, contribute to large projects, and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed.
You keep your lead/EM informed about your work and that of the team, including escalation of issues, so they can share updates with stakeholders.

Qualifications

Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
Proficiency in programming languages like Python/PySpark and Java or Scala.
Expertise in big data technologies such as Hadoop, Spark, and Kafka.
In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB, NoSQL databases).
Experience and expertise in building complex end-to-end data pipelines.
Experience with orchestration and designing job schedules using CI/CD and workflow tools like Jenkins, Airflow, or Databricks.
Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
Ability to mentor junior team members.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
Strong leadership, problem-solving, and decision-making skills.
Excellent communication and collaboration abilities.
Familiarity or certification in Databricks is a plus.
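For illustration only (the DAG id, task callables, and schedule are assumptions, not from the posting), orchestrating a daily pipeline with Airflow, one of the schedulers the qualifications mention, could look like this Airflow 2.x-style sketch:

```python
# Hypothetical sketch: a daily Airflow DAG chaining extract, transform, and load tasks.
# DAG id, task bodies, and schedule are illustrative assumptions (Airflow 2.4+ naming).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape into analytics-ready tables")

def load():
    print("publish to the warehouse")

with DAG(
    dag_id="daily_member_metrics",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```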
We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personally identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Description

Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the comprehensive e-commerce product catalog. We power the online shopping experience for customers worldwide, enabling them to find, discover, and purchase anything they desire. Our scaled, distributed systems process hundreds of millions of updates across billions of products, including physical, digital, and service offerings.

You will be part of the Catalog Support Programs (CSP) team under Catalog Support Operations (CSO) in the ASCS org. CSP provides program management, technical support, and strategic initiatives to enhance the customer experience, owning the implementation of business logic and configurations for ASCS. We are establishing a new centralized Business Intelligence team to build self-service analytical products for ASCS that provide relevant insights and data deep dives across the business. By leveraging advanced analytics and AI/ML, we will transform catalog data into predictive insights, helping prevent customer issues before they arise. Real-time intelligence will support proactive decision-making, enabling faster, data-driven decisions across the organization and driving long-term growth and an enhanced customer experience.

We are looking for a creative and goal-oriented BI Engineer to join our team to harness the full potential of data-driven insights to make informed decisions, identify business opportunities and drive business growth. This role requires an individual with excellent analytical abilities, knowledge of business intelligence solutions, business acumen, and the ability to work with various tech/product teams across ASCS. This BI Engineer will support the ASCS org by owning complex reporting, automating reporting solutions, and ultimately providing insights and drivers for decision making. You must be a self-starter and be able to learn on the go. You should have excellent written and verbal communication skills to work with business owners to develop and define key business questions, and to build data sets that answer those questions.

As a Business Intelligence Engineer in the CSP team, you will be responsible for analyzing petabytes of data to identify business trends and points of customer friction, and for developing scalable solutions to enhance customer experience and safety. You will work closely with internal stakeholders to define key performance indicators (KPIs), implement them into dashboards and reports, and present insights in a concise and effective manner. This role will involve collaborating with business and tech leaders within ASCS and cross-functional teams to solve problems, create operational efficiencies, and deliver against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively uncover new insights that drive decision-making by senior leadership.

As a key member of the CSP team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. There will be a steep learning curve, adding a fair amount of business skills to the individual.
Key job responsibilities

Work closely with BIEs, Data Engineers, and Scientists in the team to collaborate effectively with product managers and create scalable solutions for business problems.
Create program goals and related metrics, track progress, and manage through obstacles to help the team achieve objectives.
Identify opportunities for improvement or automation in existing data processes and lead the changes using business acumen and data handling skills.
Ensure best practices on data integrity, design, testing, implementation, documentation, and knowledge sharing.
Contribute to supplier operations strategy development based on data analysis.
Lead strategic projects to formalize and scale organizational processes.
Build and manage weekly, monthly, and quarterly business review metrics.
Build data reports and dashboards using SQL, Excel, and other tools to improve business efficiency across programs.
Understand loosely defined or structured problems and provide BI solutions for difficult problems, delivering large-scale BI solutions.
Provide solutions that drive the team's business decisions and highlight new opportunities.
Improve code quality and optimize BI processes.
Demonstrate proficiency in a scripting language, data modeling, data pipeline design, and applying basic statistical methods (e.g., regression) to difficult business problems.

A day in the life

A day in the life of a BIE-II will include:
Working closely with cross-functional teams including Product/Program Managers, Software Development Managers, Applied/Research/Data Scientists, and Software Developers.
Building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making.
Leading reporting and analytics initiatives to drive data-informed decision making.
Designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight.
Transforming complex business requirements into actionable analytics solutions.

About The Team

This central BIE team within ASCS will be responsible for building a structured analytical data layer, bringing in BI discipline by defining metrics in a standardized way and establishing a single definition of metrics across the catalog ecosystem. The team will also identify clear sources of truth for critical data. It will build and maintain the data pipelines for critical projects tailored to the needs of ASCS teams, leveraging catalog data to provide a unified view of product information. This will support real-time decision-making and empower teams to make data-driven decisions quickly, driving innovation. The team will leverage advanced analytics to shift us to a proactive, data-driven approach, enabling informed decisions that drive growth and enhance the customer experience. It will adopt best practices, standardize metrics, and continuously iterate on queries and data sets as they evolve. Automated quality controls and real-time monitoring will ensure consistent data quality across the organization.

Basic Qualifications

4+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools.
Experience with data modeling, warehousing, and building ETL pipelines.
Experience with statistical analysis packages such as R, SAS, and Matlab.
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.
Experience developing and presenting recommendations on new metrics, allowing better understanding of the performance of the business.
Experience writing complex SQL queries.
Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field.
Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes.
Proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight to drive data-driven decision making.
Experience applying basic statistical methods (e.g., regression, t-test, chi-squared) as well as exploratory, deterministic, and probabilistic analysis techniques to solve complex business problems.
Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports.
Track record of generating key business insights and collaborating with stakeholders.
Strong verbal and written communication skills, with the ability to effectively present data insights to both technical and non-technical audiences, including senior management.

Preferred Qualifications

Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.
Master's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field.
Proven track record of conducting large-scale, complex data analysis to support business decision-making in a data warehouse environment.
Demonstrated ability to translate business needs into data-driven solutions and vice versa.
Relentless curiosity and drive to explore emerging trends and technologies in the field.
Knowledge of data modeling and data pipeline design.
Experience with statistical and correlation analysis, as well as exploratory, deterministic, and probabilistic analysis techniques.
Experience in designing and implementing custom reporting systems using automation tools.
Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A2990532
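Purely as an illustration of the SQL-plus-scripting pattern in the qualifications above (the table, columns, and bucket are invented assumptions), a weekly KPI rollup feeding a dashboard might be computed as:

```python
# Hypothetical sketch: weekly defect-rate KPI over a catalog updates table via Spark SQL.
# Table, column, and bucket names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("catalog-kpi").getOrCreate()

weekly_kpi = spark.sql("""
    SELECT date_trunc('week', update_ts)                  AS week,
           marketplace,
           COUNT(*)                                       AS total_updates,
           AVG(CASE WHEN has_defect THEN 1 ELSE 0 END)    AS defect_rate
    FROM catalog.product_updates
    GROUP BY date_trunc('week', update_ts), marketplace
""")

# Persist for a QuickSight (or similar) dashboard to read.
weekly_kpi.write.mode("overwrite").parquet("s3://example-bi-bucket/weekly_kpi/")
```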

Posted 3 days ago

Apply

4.0 - 12.0 years

4 - 7 Lacs

Cochin

On-site

Source: Glassdoor

Company Overview

Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential.

Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.

Job Overview

In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.

Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Deliver data pipeline projects from development to deployment, managing timelines and risks.
Ensure data quality and integrity through meticulous testing and monitoring.
Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
Work closely with the product team and key collaborators to understand data requirements.
Adhere to data engineering industry standards and best practices.
Experience developing in an Agile development environment, with comfort in Agile terminology and ceremonies.
Familiarity with code versioning using Git and code migration tools.
Familiarity with JIRA.
Stay up to date with the latest data technologies and trends.

What we expect of you

Basic Qualifications:
Doctorate degree OR Master's degree and 4 to 6 years of Information Systems experience OR Bachelor's degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience.
Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
Proficiency in Python, PySpark, and SQL.
Development knowledge in Databricks.
Good analytical and problem-solving skills to address sophisticated data challenges.

Preferred Qualifications:
Experience with data modeling.
Experience working with ETL orchestration technologies.
Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Familiarity with SQL/NoSQL databases.

Soft Skills:
Skilled in breaking down problems, documenting problem statements, and estimating efforts.
Effective communication and interpersonal skills to collaborate with multi-functional teams.
Excellent analytical and problem-solving skills.
Strong verbal and written communication skills.
Ability to work successfully with global teams.
High degree of initiative and self-motivation.
Team-oriented, with a focus on achieving team goals.

Compensation

Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.

Our Commitment to Diversity & Inclusion

At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
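As a purely illustrative sketch of the Databricks-plus-PySpark work this role describes (the storage paths and table names are assumptions, and the `spark` session is provided automatically in a Databricks notebook), a small ETL step might be:

```python
# Hypothetical sketch: a Databricks-style ETL step writing a Delta table.
# Paths and table names are illustrative assumptions; `spark` is the
# session a Databricks notebook provides automatically.
from pyspark.sql import functions as F

orders = spark.read.parquet("/mnt/raw/orders/")  # assumed mounted storage

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("gross_revenue"))
)

# Delta format gives ACID writes and time travel on Databricks.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```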

Posted 3 days ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

Source: Glassdoor

About Company: A cloud and data analytics company that empowers businesses to unlock insights and drive innovation through modern data solutions.

Role: Data Engineer
Experience: 5 - 9 Years
Location: Chennai & Hyderabad
Notice Period: Immediate Joiner - 60 Days

Roles and Responsibilities

Bachelor's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering or a related role.
Proficiency in programming languages such as Python, Java, or Scala, and scripting languages like SQL.
Experience with big data technologies and ETL processes.
Knowledge of cloud services (AWS, Azure, GCP) and their data-related services.
Familiarity with data modeling, data warehousing, and building high-volume data pipelines.
Understanding of distributed systems and microservices architecture.
Experience with source control tools like Git, and CI/CD practices.
Strong problem-solving skills and ability to work independently.
Excellent communication and collaboration skills.

Mandatory skillset: Python, PySpark, SQL, Databricks, AWS

Posted 3 days ago

Apply

12.0 years

1 - 6 Lacs

Hyderābād

On-site

Source: Glassdoor

The Windows Data Team is responsible for developing and operating one of the world's largest data ecosystems: PiBs of data are processed, stored, and accessed every day. In addition to Azure, Fabric, and Microsoft offerings, the team also utilizes modern open-source technologies such as Spark, StarRocks, and ClickHouse. Thousands of developers across Windows, Bing, Ads, Edge, MSN, and other groups work on top of the data products the team builds. We're looking for passionate engineers to join us in our mission of powering Microsoft businesses through the data substrate and infusing our data capabilities into the industry.

We are looking for a Principal Software Engineering Manager who can lead a team to design, develop, and maintain data pipelines and applications using Spark, SQL, map-reduce, and other technologies on our big data platforms. You will work with a team of data scientists, analysts, and engineers to deliver high-quality data solutions that support our business goals and customer needs. You will also collaborate with other teams across the organization to ensure data quality, security, and compliance.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

Lead a team of software developers to develop and optimize data pipelines and applications using Spark, Cosmos, Azure, SQL, and other frameworks.
Implement data ingestion, transformation, and processing logic using various data sources and formats.
Perform data quality checks, testing, and debugging to ensure data accuracy and reliability.
Document and maintain data pipeline specifications, code, and best practices.
Research and evaluate new data technologies and tools to improve data performance and scalability.
Work with a world-class engineer/scientist team on Big Data, analytics, and OLAP/OLTP.
Embrace both Microsoft technology and cutting-edge open-source technology.

Qualifications

Required Qualifications:
Bachelor's Degree in Computer Science or a related technical field AND 12+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Master's Degree in Computer Science or a related technical field AND 10+ years of technical engineering experience with coding in those languages; OR equivalent experience.
4+ years of people management experience.
Demonstrated working knowledge of cloud and distributed computing platforms such as Azure or AWS.
Strong knowledge of and experience with Map Reduce, Spark, Kafka, Synapse, Fabric, or other data processing frameworks.
Fluent in English, both written and spoken.

Preferred Qualifications:
Experience with Cosmos DB or other NoSQL databases is a plus.
Experience in data engineering, data analysis, or related data fields.
Experience with data science and ML tools such as scikit-learn, R, Azure AI, PySpark, or similar.
Experience with data modeling, data warehousing, and ETL techniques.
Experience in designing, developing, and shipping services with secure continuous integration and continuous delivery (CI/CD) practices.
Relational and/or non-relational (NoSQL) databases.
C/C++ and lower-level languages are a plus.
#W+Djobs #WindowsIndia #WDXIndia

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 3 days ago

Apply

6.0 years

4 - 7 Lacs

Hyderābād

On-site

Source: Glassdoor

Req ID: 327061

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/PySpark/Apache Spark developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

"At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here. NTT DATA Services currently seeks a Python Developer to join our team in Hyderabad, India."

Design and build ETL solutions, with experience in data engineering and data modelling at large scale in both batch and real-time environments.

Skills required: Python, PySpark, Apache Spark, Unix shell scripting, GCP, BigQuery, MongoDB, Kafka event streaming, API development, CI/CD.

For Software Engineering 3: 6+ years of experience.
Mandatory: Apache Spark with Python, PySpark, GCP with BigQuery, databases.
Secondary mandatory: Ab Initio ETL.
Good to have: Unix shell scripting and Kafka event streaming.

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
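For illustration only (project, dataset, and bucket names are assumptions, and the spark-bigquery connector is assumed to be available on the cluster, e.g. on Dataproc), loading a PySpark DataFrame into BigQuery as this stack suggests might look like:

```python
# Hypothetical sketch: writing a PySpark DataFrame to BigQuery via the
# spark-bigquery connector. Project, dataset, and bucket names are
# illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-to-bigquery").getOrCreate()

events = spark.read.json("gs://example-bucket/raw-events/")

(events.write.format("bigquery")
    .option("temporaryGcsBucket", "example-temp-bucket")  # staging for the load job
    .mode("append")
    .save("example-project.analytics.events"))
```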

Posted 3 days ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Source: Glassdoor

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift, Monday to Friday Experience: AWS: 4 years (Required) Data Engineering: 6 years (Required) Python: 3 years (Required) PySpark/Spark: 3 years (Required) Work Location: In person

Posted 3 days ago

Apply

3.0 years

5 - 8 Lacs

Gurgaon

Remote

GlassDoor logo

Job description About this role Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over USD 11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference. Technology & Operations Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing including Corporate Actions and Cash & Asset Operations, and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally as well as industry-leading service to our end clients. GIO Engineering Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-oriented workflows, data pipelines, and management dashboards. The Role – GIO Engineering Data Lead Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau / PowerBI dashboards for internal BlackRock clients. The next stage in the project involves Azure / Snowflake integration and commercializing the offering so BlackRock’s 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business. Responsibilities Specifically, we expect the role to involve the following core responsibilities and would expect a successful candidate to be able to demonstrate the following (not in order of priority): Design, develop and maintain a Data Analytics Infrastructure Work with a project manager or drive the project management of team deliverables Work with subject matter experts and users to understand the business and their requirements. 
Help determine the optimal dataset and structure to deliver on those user requirements Work within a standard data / technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner Work with team lead to understand and help prioritize the team’s queue of work Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize / eliminate associated developer BAU activities. Leverage industry standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance Experience 3+ years of experience in writing ETL, data curation and analytical jobs using Hadoop-based distributed computing technologies: Spark / PySpark, Hive, etc. 3+ years of knowledge and experience working with large enterprise databases, preferably cloud-based databases/data warehouses like Snowflake on an Azure or AWS set-up Knowledge and experience working with Data Science / Machine Learning / Gen AI frameworks in Python (Azure OpenAI, Meta, etc.) Knowledge and experience building reporting and dashboards using BI Tools: Tableau, MS PowerBI, etc. Prior experience working with source code version management tools like GitHub Prior experience working with and following Agile-based workflow paths and ticket-based development cycles Prior experience setting up infrastructure and working on Big Data analytics Strong analytical skills with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy Experience working with SMEs / Business Analysts, and working with Stakeholders for sign-off Our benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. 
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R254094

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

GlassDoor logo

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Databricks Engineers Requirements: Total Experience: 5-8 years with 4+ years of relevant experience Skills: Proficiency on the Databricks platform Strong hands-on experience with PySpark, SQL, and Python Any cloud - Azure, AWS, GCP Certifications (Any of the following): Databricks Certified Associate Developer for Spark 3.0 - Preferred Databricks Certified Data Engineer Associate Databricks Certified Data Engineer Professional Location: Bangalore Mandatory Skill Sets Databricks, PySpark, SQL, Python, any cloud - Azure, AWS, GCP Preferred Skill Sets Related Certification - •Databricks Certified Associate Developer for Spark 3.0 - Preferred •Databricks Certified Data Engineer Associate •Databricks Certified Data Engineer Professional Year of Experience required 5 to 8 years Education Qualification BE, B.Tech, ME, M.Tech, MBA, MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Databricks Platform Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, 
Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 3 days ago

Apply

0 years

7 - 8 Lacs

Chennai

On-site

GlassDoor logo

Join us as a Software Engineer - PySpark This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high performance, secure and robust solutions It’s a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders What you'll do In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be: Producing complex and critical software rapidly and to a high quality that adds value to the business Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning Collaborating to optimise our software engineering capability Designing, producing, testing and implementing our working software solutions Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations The skills you'll need To take on this role, you’ll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need four to seven years of experience in PySpark, Python, AWS, SQL and Tableau. You'll also need experience in developing and supporting ETL pipelines and Tableau reporting. You’ll also need: Experience of working with development and testing tools, bug tracking tools and wikis Experience in multiple programming languages or low code toolsets Experience of DevOps and Agile methodology and associated toolsets A background in solving highly complex, analytical and numerical problems Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance

Posted 3 days ago

Apply

5.0 years

4 - 8 Lacs

Ahmedabad

On-site

GlassDoor logo

Unlock Your Potential With IGNEK Welcome to IGNEK, where we combine innovation and passion! We want our workplace to help you grow professionally and appreciate the special things each person brings. Come with us as we use advanced technology to make a positive difference. At IGNEK, we know our success comes from our team’s talent and hard work. Celebrate Successes Harness Your Skills Experience Growth Together Work, Learn, Celebrate Appreciate Unique Contributions Get Started Culture & Values Our culture & values guide our actions and define our principles. Growth Learn and grow with us. We’re committed to providing opportunities for you to excel and expand your horizons. Transparency We are very transparent in terms of work, culture and communication to build trust and strong bonding among employees, teams and managers. People First Our success is all about our people. We care about your well-being and value diversity in our inclusive workplace. Be a team Team work is our strength. Embrace a “Be a Team” mindset, valuing collective success over individual triumphs. Together, we can overcome challenges and reach new heights. Perks & Benefits Competitive flexibility and comprehensive benefits prioritize your well-being. Creative programs, professional development, and a vibrant work-life balance ensure your success is our success. 5 Days Working Festival Celebration Rewards & Benefits Certification Program Skills Improvement Referral Program Friendly Work Culture Training & Development Enterprise Projects Leave Carry Forward Yearly Trip Hybrid Work Fun Activities Indoor | Outdoor Flexible Timing Reliable Growth Team Lunch Stay Happy Opportunity Work-Life Balance What Makes You Different? BE Authentic Stay true to yourself, it’s what sets you apart BE Proactive Take charge of your work, don’t wait for things to happen BE A Learner Keep an open mind and never stop seeking knowledge BE Professional Approach every task with diligence and integrity BE Innovative Think outside the box and push boundaries BE Passionate Let your enthusiasm light the path to success Senior Data Engineer (AWS Expert) Technology: Data Engineer Job Type: Full Time Job Location: Ahmedabad Experience: 5+ Years Location: Ahmedabad (On-site) Shift Time: 2 PM – 11 PM IST About Us: IGNEK is a fast-growing custom software development company with over a decade of industry experience and a passionate team of 25+ experts. We specialize in crafting end-to-end digital solutions that empower businesses to scale efficiently and stay ahead in an ever-evolving digital world. At IGNEK, we believe in quality, innovation, and a people-first approach to solving real-world challenges through technology. We are looking for a highly skilled and experienced Data Engineer with deep expertise in AWS cloud technologies and strong hands-on experience in backend development, data pipelines, and system design. The ideal candidate will take ownership of delivering robust and scalable solutions while collaborating closely with cross-functional teams and the tech lead. Key Responsibilities: Lead and manage the end-to-end implementation of cloud-native data solutions on AWS. Design, build, and maintain scalable data pipelines (PySpark/Spark) and data lake architectures (Delta Lake 3.0 or similar). Migrate on-premises systems to modern, scalable AWS-based services, delivering end-to-end solutions. Participate in code reviews, agile ceremonies, and documentation. 
Engineer robust relational databases using Postgres or Oracle with a strong understanding of procedural languages. Collaborate with the tech lead to understand business requirements and deliver practical, scalable solutions. Integrate newly developed features following defined SDLC standards using CI/CD pipelines. Develop orchestration and automation workflows using tools like Apache Airflow. Ensure all solutions comply with security best practices, performance benchmarks, and cloud architecture standards. Monitor, debug, and troubleshoot issues across multiple environments. Stay current with new AWS features, services, and trends to drive continuous platform improvement. Required Skills & Qualifications: 5+ years of professional experience in data engineering and backend development. Strong expertise in Python, Scala, and PySpark. Deep knowledge of AWS services: EC2, S3, Lambda, RDS, Kinesis, IAM, API Gateway, and others. Hands-on experience with Postgres or Oracle, and building relational data stores. Experience with Spark clusters, Delta Lake, Glue Catalog, and large-scale data processing. Proven track record of end-to-end project delivery and third-party system integrations. Solid understanding of microservices, serverless architectures, and distributed computing. Skilled in Java, Bash scripting, and search tools like Elasticsearch. Proficient in using CI/CD tools (e.g., GitLab, GitHub, AWS CodePipeline). Experience working with Infrastructure as Code (IaC) using Terraform. Hands-on experience with Docker, containerization, and cloud-native deployments. Preferred Qualifications: AWS Certifications (e.g., AWS Certified Solutions Architect or similar). Exposure to Agile/Scrum project methodologies. Familiarity with Kubernetes, advanced networking, and cloud security practices. Experience managing or collaborating with onshore/offshore teams. Soft Skills: Excellent communication and stakeholder management. Strong leadership and problem-solving abilities. Team player with a collaborative mindset. High ownership and accountability in delivering quality outcomes.

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

As a Data Engineer, you are required to: Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments while ensuring data integrity, consistency, and accuracy across the entire data pipeline. Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation. Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets to enable efficient querying. Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable. Stay up-to-date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies. Evaluate and implement new tools to improve data engineering processes. Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable. Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL. Desired Knowledge & Experience: Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming Knowing Spark internals: Catalyst/Tungsten/Photon Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot Test: pytest, Great Expectations CI/CD: YAML Azure Pipelines, Continuous Delivery, Acceptance Testing Big Data Design: Lakehouse/Medallion Architecture, Parquet/Delta, Partitioning, Distribution, Data Skew, Compaction Languages: Python/Functional Programming (FP) SQL: TSQL/Spark SQL/HiveQL Storage: Data Lake and Big Data Storage Design Additionally, it is helpful to know the basics of: Data Pipelines: ADF/Synapse Pipelines/Oozie/Airflow Languages: Scala, Java NoSQL: Cosmos, Mongo, Cassandra Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model SQL Server: TSQL, Stored Procedures Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka Data Catalog: Azure Purview, Apache Atlas, Informatica Required Soft Skills & Other Capabilities: Great attention to detail and good analytical abilities. Good planning and organizational skills. Collaborative approach to sharing ideas and finding solutions. Ability to work independently and also in a global team environment.

Posted 3 days ago

Apply

0 years

5 - 8 Lacs

Indore

On-site

GlassDoor logo

AV-230749 Indore, Madhya Pradesh, India Full-time Permanent Global Business Services DHL INFORMATION SERVICES (INDIA) LLP Your IT Future, Delivered Senior Software Engineer (Azure BI) Open to all PAN India candidates. With a global team of 5,800 IT professionals, DHL IT Services connects people and keeps the global economy running by continuously innovating and creating sustainable digital solutions. We work beyond global borders and push boundaries across all dimensions of logistics. You can leave your mark shaping the technology backbone of the biggest logistics company of the world. Our offices in Cyberjaya, Prague, and Chennai have earned #GreatPlaceToWork certification, reflecting our commitment to exceptional employee experiences. Digitalization. Simply delivered. At IT Services, we are passionate about Azure Databricks and PySpark. Our PnP BI Solutions team is continuously expanding. No matter your level of Software Engineer Azure BI proficiency, you can always grow within our diverse environment. #DHL #DHLITServices #GreatPlace #pyspark #azuredatabricks #snowflakedatabase Grow together Timely delivery of DHL packages around the globe in a way that ensures customer data are secure is at the core of what we do. You will provide project deliverables and day-to-day operation support and help investigate and resolve incidents. Sometimes, requirements or issues might get tricky, and this is where your expertise in development or the cooperation on troubleshooting with other IT support teams and specialists will come into play. For any requirements regarding BI use cases in an Azure environment, you are our superhero. The same applies when it comes to production and incidents that need to be fixed. Ready to embark on the journey? Here’s what we are looking for: Practical experience in programming using SQL, PySpark (Python), Azure Databricks and Azure Data Factory Experience in administration and configuration of Databricks Clusters Experience with Snowflake Database Knowledge of Data Vault data modeling (if not: high motivation to learn the modeling approach) Experience with streaming APIs like Kafka, and with CI/CD, XML/JSON, ADLS2 A comprehensive understanding of public cloud platforms, with a preference for Microsoft Azure Proven ability to work in a multi-cultural environment An array of benefits for you: Flexible Work Guidelines. Flexible Compensation Structure. Global work culture & opportunities across geographies. Insurance Benefit - Health Insurance for family, parents & in-laws, Term Insurance (Life Cover), Accidental Insurance.

Posted 3 days ago

Apply

12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 12 The Team You will be an expert contributor and part of the Rating Organization’s Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization’s critical data domains, technology stacks and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings’ next-gen analytics platform. Responsibilities: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs. Experience & Qualifications Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent is required Proficient with software development lifecycle (SDLC) methodologies like Agile, Test-driven development Total 12+ years of experience with 8+ years designing enterprise products, modern data stacks and analytics platforms 6+ years of hands-on experience contributing to application architecture & designs, proven software/enterprise integration design patterns and full-stack knowledge including modern distributed front end and back-end technology stacks 5+ years full stack development experience in modern web development technologies, Java/J2EE, UI frameworks like Angular, React, SQL, Oracle, NoSQL Databases like MongoDB Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus Experience designing transactional/data warehouse/data lake and data integrations with the Big Data ecosystem leveraging AWS cloud technologies Thorough understanding of distributed computing Passionate, smart, and articulate developer Quality-first mindset with a strong background and experience with developing products for a global audience at scale Excellent analytical thinking, interpersonal, oral and written communication skills with strong ability to influence both IT and business partners Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented Excellent communication skills are essential, with strong verbal and writing proficiencies Additional Preferred Qualifications Experience working with AWS Experience with SAFe Agile Framework Bachelor's/PG degree in Computer Science, Information Systems or equivalent. 
Hands-on experience contributing to application architecture & designs, proven software/enterprise integration design principles Ability to prioritize and manage work to critical project timelines in a fast-paced environment Excellent Analytical and communication skills are essential, with strong verbal and writing proficiencies Ability to train and mentor About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Inclusive Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312491 Posted On: 2025-04-07 Location: Mumbai, Maharashtra, India

Posted 4 days ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About The Role Grade Level (for internal use): 11 The Team You will be an expert contributor and part of the Rating Organization’s Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization’s critical data domains, technology stacks and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings’ next-gen analytics platform. Responsibilities: Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform. Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving. Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals. Manage and improve existing software solutions, ensuring high performance and scalability. Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes. Produce comprehensive technical design documents and conduct technical walkthroughs. Experience & Qualifications Bachelor’s degree in Computer Science, Information Systems, Engineering, or equivalent is required Proficient with software development lifecycle (SDLC) methodologies like Agile, Test-driven development 10+ years of experience with 4+ years designing/developing enterprise products, modern tech stacks and data platforms 4+ years of hands-on experience contributing to application architecture & designs, proven software/enterprise integration design patterns and full-stack knowledge including modern distributed front end and back-end technology stacks 5+ years full stack development experience in modern web development technologies, Java/J2EE, UI frameworks like Angular, React, SQL, Oracle, NoSQL Databases like MongoDB Experience designing transactional/data warehouse/data lake and data integrations with the Big Data ecosystem leveraging AWS cloud technologies Thorough understanding of distributed computing Passionate, smart, and articulate developer Quality-first mindset with a strong background and experience with developing products for a global audience at scale Excellent analytical thinking, interpersonal, oral and written communication skills with strong ability to influence both IT and business partners Superior knowledge of system architecture, object-oriented design, and design patterns. Good work ethic, self-starter, and results-oriented Excellent communication skills are essential, with strong verbal and writing proficiencies Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus Additional Preferred Qualifications Experience working with AWS Experience with SAFe Agile Framework Bachelor's/PG degree in Computer Science, Information Systems or equivalent. 
Hands-on experience contributing to application architecture & designs, proven software/enterprise integration design principles Ability to prioritize and manage work to critical project timelines in a fast-paced environment Excellent Analytical and communication skills are essential, with strong verbal and writing proficiencies Ability to train and mentor About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312489 Posted On: 2025-05-14 Location: Mumbai, Maharashtra, India

Posted 4 days ago

Apply

4.0 - 8.0 years

5 - 10 Lacs

Pune

Hybrid

Naukri logo

About Client Hiring for One of the Most Prestigious Multinational Corporations! Job Description Job Title: Azure Data Engineer Qualification: Any Graduate or Above Relevant Experience: 4 to 7 years Required technical skills: Databricks, Python, PySpark, SQL, Azure Cloud, Power BI Location: Pune CTC Range: 5 to 10 LPA Notice period: Immediate / serving notice period Shift Timing: NA Mode of Interview: Virtual Sonali Jena Staffing Analyst - IT Recruiter Black and White Business Solutions Pvt Ltd Bangalore, Karnataka, INDIA sonali.jena@blackwhite.in | www.blackwhite.in +91 8067432474

Posted 4 days ago

Apply

5.0 - 9.0 years

25 - 35 Lacs

Kochi, Chennai, Bengaluru

Work from Office

Naukri logo

Experienced Data Engineer (Python, PySpark, ADB, ADF, Azure, Snowflake). Data science candidates can also apply.

Posted 4 days ago

Apply

3.0 years

0 Lacs

India

On-site

Linkedin logo

Note: Please do not apply if your salary expectations are higher than the provided salary range or your experience is less than 3 years. If you have experience in the travel industry and have worked on hotel, car rental or ferry booking before, then we can negotiate the package. Company Description Our company has been involved in promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com, www.ferriesingreece.com, etc. Through the websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, tours, etc. Role Description We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes. Major Responsibilities & Job Requirements include: • Develop and implement NLP/LLM models. • Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques. • Develop and implement AI models using Python, TensorFlow, and PyTorch. • Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models). • Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems. • Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows. • Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources. • Familiarity with Cloud AI solutions, such as IBM, Azure, Google & AWS. • Work on natural language processing (NLP) tasks and create language models (LLMs) for various applications. • Design and maintain SQL databases for storing and retrieving data efficiently. • Utilize machine learning and deep learning techniques to build predictive models. • Collaborate with cross-functional teams to integrate AI solutions into existing systems. • Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and Big Data solutions. • Write clean, maintainable, and efficient code when required. • Handle large datasets and perform big data analysis to extract valuable insights. • Fine-tune pre-trained LLMs using specific types of data and ensure optimal performance. • Proficiency in cloud services from Amazon AWS • Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER. • Develop similarity functions and matching algorithms to align candidate skills with job requirements. • Experience with microservices, Flask, FastAPI, Node.js. • Expertise in Spark, PySpark for big data processing. • Knowledge of advanced techniques such as SVD/PCA, LSTM, NeuralProphet. • Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline. • Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements. Qualifications: • Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. • In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others. 
• Experience with database technologies and vector representation of data. • Familiarity with similarity functions and distance metrics used in matching algorithms. • Ability to design and implement custom ontologies and classification models. • Excellent problem-solving skills and attention to detail. • Strong communication and collaboration skills.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development Mandatory skill set - Python, PySpark, AWS, Glue, Lambda, CI/CD Total experience - 8+ Relevant experience - 8+ Work Location - Trivandrum / Kochi Candidates from Kerala and Tamil Nadu who are ready to relocate to the above work locations are preferred. Candidates must have experience in a lead role related to data engineering. Job Overview We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems. Key Responsibilities • Data Ingestion Framework: o Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources. o Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2 and Step Functions to build highly scalable, resilient, and automated data pipelines. • Data Quality & Validation: o Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data. o Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues. • API Development: o Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems. o Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization. • Collaboration & Agile Practices: o Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions. o Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab). Required Qualifications • Experience & Technical Skills: o Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development. o Programming Skills: Proficiency in Python and/or PySpark, and SQL for developing ETL processes and handling large-scale data manipulation. o AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks. o Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift. o API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems. o CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies. • Soft Skills: o Strong problem-solving abilities and attention to detail. o Excellent communication and interpersonal skills with the ability to work independently and collaboratively. 
o Capacity to quickly learn and adapt to new technologies and evolving business requirements. Preferred Qualifications • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. • Experience with additional AWS services such as Kinesis, Firehose, and SQS. • Familiarity with data lakehouse architectures and modern data quality frameworks. • Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments. Candidates who are interested, please drop your resume to: gigin.raj@greenbayit.com MOB NO - 8943011666

Posted 4 days ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site

Linkedin logo

🚀 We’re Hiring: Senior Data Engineer | Immediate Joiner 📍 Location: Kochi / Trivandrum | 💼 Experience: 10+ Years 🌙 Shift: US Overlapping Hours (till 10 PM IST) We are looking for a Senior Data Engineer / Associate Architect who thrives on solving complex data problems and leading scalable data infrastructure development. Must-Have Skillset: ✅ Python, PySpark ✅ AWS Glue, Lambda, Step Functions ✅ CI/CD (GitLab), API Development ✅ 5+ years hands-on AWS expertise ✅ Strong understanding of Data Quality, Validation & Monitoring Role Highlights: 🔹 Build & optimize AWS-based data ingestion frameworks 🔹 Implement high-performance APIs 🔹 Drive data quality & integrity 🔹 Collaborate across teams in Agile environments Nice to Have: ➕ Experience with Kinesis, Firehose, SQS ➕ Familiarity with Lakehouse architectures

Posted 4 days ago

Apply

Exploring PySpark Jobs in India

PySpark, the Python API for Apache Spark's distributed data processing engine, is in high demand in the Indian job market. With the increasing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in the field of big data and analytics, exploring PySpark jobs in India could be a great career move.
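
To make that concrete, here is a minimal sketch of the kind of batch aggregation a typical PySpark role involves. The file name and column names below are hypothetical, chosen only for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local session; on a cluster the runtime usually provides this
spark = SparkSession.builder.appName("sales-summary").getOrCreate()

# Read a hypothetical CSV of sales records into a DataFrame
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Total revenue per city, highest first
top_cities = (
    sales.groupBy("city")
         .agg(F.sum("amount").alias("total_revenue"))
         .orderBy(F.col("total_revenue").desc())
)

top_cities.show(5)
spark.stop()
```

Transformations like groupBy and agg are lazy; nothing executes until an action such as show() is called, which is what lets Spark optimize and distribute the whole plan.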

Top Hiring Locations in India

Here are 5 major cities in India where companies are actively hiring for PySpark roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Mumbai 5. Delhi

Average Salary Range

The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the field of PySpark, a typical career progression may look like this: 1. Junior Developer 2. Data Engineer 3. Senior Developer 4. Tech Lead 5. Data Architect

Related Skills

In addition to PySpark, professionals in this field are often expected to have or develop skills in: - Python programming - Apache Spark - Big data technologies (Hadoop, Hive, etc.) - SQL - Data visualization tools (Tableau, Power BI)
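
Because Spark DataFrames can be queried with plain SQL, the SQL skill above transfers to PySpark almost directly. A small sketch, with made-up data and names:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

# An in-memory DataFrame standing in for a real table (values are illustrative)
employees = spark.createDataFrame(
    [("Asha", "Bangalore", 12), ("Ravi", "Pune", 7), ("Meera", "Hyderabad", 15)],
    ["name", "city", "ctc_lakhs"],
)

# Register a temporary view so it can be queried with ordinary SQL
employees.createOrReplaceTempView("employees")

spark.sql("""
    SELECT city, AVG(ctc_lakhs) AS avg_ctc
    FROM employees
    GROUP BY city
    ORDER BY avg_ctc DESC
""").show()

spark.stop()
```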

Interview Questions

Here are 25 interview questions you may encounter when applying for PySpark roles:

  • Explain what PySpark is and its main features (basic)
  • What are the advantages of using PySpark over other big data processing frameworks? (medium)
  • How do you handle missing or null values in PySpark? (medium)
  • What is RDD in PySpark? (basic)
  • What is a DataFrame in PySpark and how is it different from an RDD? (medium)
  • How can you optimize performance in PySpark jobs? (advanced)
  • Explain the difference between map and flatMap transformations in PySpark (basic; illustrated in the sketch after this list)
  • What is the role of a SparkContext in PySpark? (basic)
  • How do you handle schema inference in PySpark? (medium)
  • What is a SparkSession in PySpark? (basic)
  • How do you join DataFrames in PySpark? (medium)
  • Explain the concept of partitioning in PySpark (medium)
  • What is a UDF in PySpark? (medium)
  • How do you cache DataFrames in PySpark for optimization? (medium)
  • Explain the concept of lazy evaluation in PySpark (medium)
  • How do you handle skewed data in PySpark? (advanced)
  • What is checkpointing in PySpark and how does it help in fault tolerance? (advanced)
  • How do you tune the performance of a PySpark application? (advanced)
  • Explain the use of Accumulators in PySpark (advanced)
  • How do you handle broadcast variables in PySpark? (advanced)
  • What are the different data sources supported by PySpark? (medium)
  • How can you run PySpark on a cluster? (medium)
  • What is the purpose of the PySpark MLlib library? (medium)
  • How do you handle serialization and deserialization in PySpark? (advanced)
  • What are the best practices for deploying PySpark applications in production? (advanced)
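
It helps to have run the basic concepts yourself before discussing them. The sketch below touches four of the questions above: map vs flatMap, handling null values, a simple UDF, and caching. All data and names are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("interview-concepts").getOrCreate()
sc = spark.sparkContext

# map vs flatMap: map yields one output per input, flatMap flattens the results
lines = sc.parallelize(["spark is fast", "pyspark is pythonic"])
print(lines.map(lambda s: s.split()).collect())      # [['spark', 'is', 'fast'], ...]
print(lines.flatMap(lambda s: s.split()).collect())  # ['spark', 'is', 'fast', ...]

# Handling nulls: drop rows missing 'city', fill missing 'ctc' with 0
df = spark.createDataFrame(
    [("Asha", "Bangalore", 12), ("Ravi", None, 7), ("Meera", "Hyderabad", None)],
    ["name", "city", "ctc"],
)
cleaned = df.dropna(subset=["city"]).fillna({"ctc": 0})

# A simple UDF; built-in functions are preferred for performance where they exist
shout = F.udf(lambda s: s.upper(), StringType())
cleaned = cleaned.withColumn("name_upper", shout(F.col("name")))

# Caching: persist a DataFrame that is reused across several actions
cleaned.cache()
print(cleaned.count())
cleaned.show()

spark.stop()
```

Note that cache() is itself lazy: the DataFrame is only materialized in memory on the first action (count() here), after which subsequent actions such as show() reuse the cached data.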

Closing Remark

As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!
