Home
Jobs

3895 PySpark Jobs - Page 18

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied and challenging role. Building on our past, ready for the future.

Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role
- Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform.
- Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale.
- Optimize and troubleshoot AWS Glue jobs for performance and reliability.
- Use Python and PySpark to efficiently handle large volumes of data during the ingestion process.
- Collaborate with data architects to design and implement data models that support business requirements.
- Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems.
- Implement monitoring solutions to track data pipeline performance and proactively identify and address issues.
- Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs.
- Familiarity with Infrastructure as Code (IaC) tools such as Terraform and AWS CDK.
- Proficiency in event-driven, batch-based and API-led data integrations.
- Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines or GitHub Actions.

About You
To be considered for this role it is envisaged you will possess the following attributes:

Technical and industry experience:
- Independent integration developer with 5+ years of experience developing and delivering integration projects in agile or waterfall project environments.
- Proficiency in Python, PySpark and SQL for data manipulation and pipeline development.
- Hands-on experience with AWS Glue, Airflow, DynamoDB, Redshift, S3 buckets, Event Grid, and other AWS services.
- Experience implementing CI/CD pipelines, including data testing practices.
- Proficiency in Swagger, JSON, XML, SOAP and REST-based web service development.

Behaviors required:
- Driven by our values and purpose in everything we do.
- Visible, active, hands-on approach to help teams be successful.
- Strong proactive planning ability.
- Optimistic, energetic problem solver with the ability to see long-term business outcomes.
- Collaborative; able to listen and compromise to make progress.
- "Stronger together" mindset, with a focus on innovation and creation of tangible, realized value.
- Challenges the status quo.

Education, qualifications, accreditation, training:
- Degree in Computer Science and/or related fields.
- AWS data engineering certifications desirable.

Moving forward together: We're committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard.
We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low-carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Company: Worley
Primary Location: IND-MM-Mumbai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jun 4, 2025
Unposting Date: Jul 4, 2025
Reporting Manager Title: Director
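
For context on the day-to-day work described above, here is a minimal, hypothetical sketch of an AWS Glue PySpark job of the kind this role involves. The Glue boilerplate follows the standard Glue script skeleton; bucket names, columns, and paths are invented placeholders, not Worley's actual systems.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical raw zone: CSV drops from upstream sources.
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/sensors/")

# Light cleanup plus an ingest-date column for partitioning.
cleaned = (raw.dropna(how="all")
              .withColumn("ingest_date", F.current_date()))

# Land curated Parquet, partitioned for downstream Athena/Spectrum reads.
cleaned.write.mode("append").partitionBy("ingest_date") \
       .parquet("s3://example-curated-bucket/sensors/")

job.commit()
```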

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, PySpark, Core Banking
Good-to-have skills: AWS Big Data
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good-to-have skills: experience with AWS Big Data.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.
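
As a purely illustrative sketch of the Snowflake-plus-PySpark combination this role centers on, the snippet below writes a transformed DataFrame to Snowflake via the documented Spark connector source name (net.snowflake.spark.snowflake). All connection values, paths, and table names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake-load").getOrCreate()

# Hypothetical placeholders; in practice these come from a secrets
# manager, never from source code.
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "<secret>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

# Transform staged data with PySpark, then land it in a Snowflake table.
df = spark.read.parquet("s3://example-bucket/staged/transactions/")
(df.write.format("net.snowflake.spark.snowflake")
   .options(**sf_options)
   .option("dbtable", "TRANSACTIONS_CURATED")
   .mode("append")
   .save())
```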

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Microsoft Azure Databricks, Microsoft Azure Data Services
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop and enhance applications to align with business needs.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process
- Implement best practices for application design and development
- Conduct code reviews and ensure code quality

Professional & Technical Skills:
- Must-have skills: proficiency in PySpark
- Good-to-have skills: experience with Microsoft Azure Databricks, Microsoft Azure Data Services
- Strong understanding of distributed computing and data processing
- Experience in building scalable and efficient data pipelines
- Proficient in data manipulation and transformation using PySpark

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark
- This position is based at our Bhubaneswar office
- 15 years of full-time education is required
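
The listing above centers on PySpark pipelines, optionally on Azure Databricks; as a small illustration, here is a hypothetical aggregation pipeline. Paths and column names are invented, and Delta is assumed available (as it is by default on Databricks).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()

# Hypothetical input: one JSON record per application event.
events = spark.read.json("/mnt/raw/events/")

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("event_count"),
              F.countDistinct("user_id").alias("unique_users")))

# Delta is the default table format on Databricks; elsewhere Parquet works too.
daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_event_counts")
```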

Posted 3 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Note: Please apply only if you:
- Have 6 or more years of relevant experience in Data Science (excluding internships)
- Are comfortable working 5 days a week from Gurugram, Haryana
- Are an immediate joiner or currently serving your notice period

About Eucloid
At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries, from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we're pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients.

What You'll Do
As a GenAI Engineer, you will play a pivotal role in designing and deploying data-driven and GenAI-powered solutions. Your responsibilities will include:
- Analyzing large sets of structured and unstructured data to extract meaningful insights and drive business impact.
- Designing and developing machine learning models, including regression, time series forecasting, clustering, classification, and NLP.
- Building, fine-tuning, and deploying Large Language Models (LLMs) such as GPT, BERT, or LLaMA for tasks like text summarization, generation, and classification.
- Working with Hugging Face Transformers, LangChain, and vector databases (e.g., FAISS, Pinecone) to develop scalable GenAI pipelines.
- Applying prompt engineering techniques and Reinforcement Learning with Human Feedback (RLHF) to optimize GenAI applications.
- Building and deploying models using Python, R, TensorFlow, PyTorch, and Scikit-learn within production-ready environments like Flask, Azure Functions, and AWS Lambda.
- Developing and maintaining scalable data pipelines in collaboration with data engineers.
- Implementing solutions on cloud platforms like AWS, Azure, or GCP for scalable and high-performance AI/ML applications.
- Enhancing BI and visualization tools such as Tableau, Power BI, Qlik, and Plotly to communicate data insights effectively.
- Collaborating with stakeholders to translate business challenges into GenAI/data science problems and actionable solutions.
- Staying updated on emerging GenAI and AI/ML technologies and incorporating best practices into projects.

What Makes You a Fit
Academic Background: Bachelor's or Master's degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.

Technical Expertise:
- 6+ years of hands-on experience applying machine learning techniques (clustering, classification, regression, NLP).
- Strong proficiency in Python and SQL, with experience in frameworks like Flask or Django.
- Expertise in Big Data environments using PySpark.
- Deep understanding of ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with Hugging Face Transformers, OpenAI API, or similar GenAI libraries.
- Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
- Proficiency in cloud-based AI/ML deployment on AWS, Azure, or GCP.
- Experience with Docker and containerization for ML model deployment.
- Knowledge of code management methodologies and best practices for implementing scalable ML/GenAI solutions.

Extra Skills:
- Experience in Deep Learning and Reinforcement Learning.
- Hands-on experience with NLP, text mining, and LLM architectures.
- Experience with business intelligence and data visualization tools (Tableau, Power BI, Qlik).
- Experience with prompt engineering and fine-tuning LLMs for production use cases.
- Ability to effectively communicate insights and translate technical work into business value.

Why You'll Love It Here
- Innovate with the best tech: work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation.
- Impact industry giants: deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products.
- Collaborate with a world-class team: join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration.
- Accelerate your growth: access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority.
- Work in a culture of excellence: be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated.

About Our Leadership
- Anuj Gupta: former Amazon leader with over 22 years of experience building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad).
- Raghvendra Kushwah: business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow).

Key Benefits
- Competitive salary and performance-based bonus.
- Comprehensive benefits package, including health insurance and flexible work hours.
- Opportunities for professional development and career growth.

Location: Gurugram

Submit your resume to saurabh.bhaumik@eucloid.com with the subject line "Application: GenAI Engineer."

Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.
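
To ground the GenAI pipeline work described above, here is a toy retrieval sketch using Hugging Face sentence embeddings and a FAISS index, the kind of building block a RAG pipeline assembles. The corpus, model choice, and query are illustrative assumptions, not Eucloid's stack.

```python
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Toy corpus and a common public embedding checkpoint (both illustrative).
docs = ["Refund requests are processed within 5 business days.",
        "Escalations go to the on-call support lead.",
        "Invoices are generated on the first of each month."]
model = SentenceTransformer("all-MiniLM-L6-v2")

# Normalized embeddings + inner-product index = cosine similarity search.
doc_vecs = model.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

query = model.encode(["how long do refunds take"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
for i, s in zip(ids[0], scores[0]):
    print(f"{s:.3f}  {docs[i]}")  # top hits would feed an LLM prompt in a RAG setup
```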

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site


Company Description
ThreatXIntel is a startup cyber security company specializing in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We offer customized, affordable solutions tailored to meet the specific needs of businesses of all sizes. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a skilled freelance Data Engineer with expertise in PySpark and AWS data services, particularly S3 and Redshift. Familiarity with Salesforce data integration is a plus. This role focuses on building scalable data pipelines and supporting analytics use cases in a cloud-native environment.

Key Responsibilities
- Design and develop ETL/ELT data pipelines using PySpark for large-scale data processing
- Ingest, transform, and store data across AWS S3 (data lake) and Amazon Redshift (data warehouse)
- Integrate data from Salesforce into the cloud data ecosystem for analysis
- Optimize data workflows for performance and cost-efficiency
- Write efficient code and queries for structured and unstructured data
- Collaborate with analysts and stakeholders to deliver clean, usable datasets

Required Skills
- Strong hands-on experience with PySpark
- Proficiency in AWS services, especially S3 and Redshift
- Basic working knowledge of Salesforce data structures or APIs
- Ability to write complex SQL for data transformation and reporting
- Familiarity with version control and Agile collaboration tools
- Good communication and documentation skills
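
As an illustration of the S3-to-Redshift flow this role describes, the sketch below reads hypothetical Salesforce exports from an S3 data lake and appends them to Redshift over JDBC. Every connection detail, path, and column is a placeholder.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-redshift").getOrCreate()

# Hypothetical data-lake path for exported Salesforce objects.
raw = spark.read.parquet("s3a://example-lake/salesforce/opportunities/")

curated = (raw.filter(F.col("stage").isNotNull())
              .withColumn("load_dt", F.current_date()))

# Small/medium loads can go straight over JDBC; large loads typically stage
# files to S3 and run a Redshift COPY instead. Driver class matches the
# Redshift JDBC 4.2 driver; all connection values are placeholders.
(curated.write.format("jdbc")
    .option("url", "jdbc:redshift://example-cluster.abc.us-east-1.redshift.amazonaws.com:5439/dev")
    .option("dbtable", "analytics.opportunities")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .option("driver", "com.amazon.redshift.jdbc42.Driver")
    .mode("append")
    .save())
```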

Posted 3 days ago

Apply

3.0 years

0 Lacs

India

Remote


Title: Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description
- Minimum 3 years of experience with modern data engineering, data warehousing, and data lake technologies on cloud platforms like Azure, AWS, GCP, or Databricks; Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling.
- Solid knowledge of data warehouse best practices, development standards, and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, or Talend, and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, or Google BigQuery.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills
- Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB.
- SAP ECC/S/4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
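
One concrete lakehouse pattern behind a listing like this is the Delta Lake upsert; the sketch below shows a hypothetical MERGE of an incremental batch into a curated Delta table. Paths and join keys are assumptions.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("lakehouse-upsert").getOrCreate()

# Hypothetical paths: an incremental batch lands next to a curated Delta table.
updates = spark.read.parquet("/mnt/landing/customers_incremental/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# Classic lakehouse upsert: update matched keys, insert new ones.
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```

MERGE keeps the curated table consistent under replayed batches, which is one reason it features so heavily in data governance and quality frameworks.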

Posted 3 days ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Senior Data Engineer - AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (5+ years of relevant AWS experience is mandatory)

About the Role
We're hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You'll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.

Key Responsibilities
- Data engineering leadership: design and implement scalable, end-to-end data ingestion and processing frameworks using AWS.
- AWS architecture: hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services.
- Data quality and validation: build automated checks, validation layers, and monitoring to ensure data accuracy and integrity.
- API development: develop secure, high-performance REST APIs for internal and external data integration.
- Collaboration: work closely with product, analytics, and DevOps teams across geographies; participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We're Looking For
- Experience: 5+ years in data engineering, with a proven track record of designing scalable AWS-based data systems.
- Technical mastery: proficient in Python/PySpark, SQL, and building big data pipelines.
- AWS expertise: deep knowledge of core AWS services used for data ingestion and processing.
- API expertise: experience designing and managing scalable APIs.
- Leadership qualities: ability to work independently, lead discussions, and drive technical decisions.

Preferred Qualifications
- Experience with Kinesis, Firehose, SQS, and data lakehouse architectures.
- Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB.
- Prior experience in distributed, multi-cluster environments.

Working Hours
US time zone overlap required: must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location
Trivandrum or Kochi; on-site or hybrid options available for the right candidate.
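
Since the role emphasizes data quality and validation layers, here is a small, hypothetical rule-based validation step in PySpark that fails a batch fast when checks break; the rules, columns, and path are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical ingested batch.
df = spark.read.parquet("s3a://example-bucket/ingested/orders/")

total = df.count()
checks = {
    "null_order_id":      df.filter(F.col("order_id").isNull()).count(),
    "negative_amount":    df.filter(F.col("amount") < 0).count(),
    "duplicate_order_id": total - df.dropDuplicates(["order_id"]).count(),
}

# Fail fast so a broken batch never reaches downstream consumers.
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All checks passed on {total} rows")
```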

Posted 3 days ago

Apply

3.0 - 8.0 years

12 - 20 Lacs

Noida, Gurugram, Mumbai (All Areas)

Work from Office


3+ years of experience in data engineering or backend development with a focus on highly scalable data systems. Experience at a B2B SaaS/AI company, ideally in a high-growth or startup environment, designing and scaling cloud-based data platforms (AWS, GCP, Azure).

Posted 3 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities
- Design, build, and maintain data pipelines using Python and PySpark
- Develop and optimize SQL queries for data extraction and transformation
- Create interactive dashboards and visualizations using Tableau
- Implement data models to support analytics and business needs
- Collaborate with cross-functional teams to understand data requirements
- Ensure data integrity, security, and governance across platforms
- Use version control tools like Git and Bitbucket for code management
- Leverage SAS and banking domain knowledge to improve data insights

Required Skills
- Strong proficiency in Python and PySpark for data processing
- Advanced SQL skills for data manipulation and querying
- Experience with Tableau for data visualization and reporting
- Familiarity with database systems and data warehousing concepts

Preferred Skills
- Knowledge of SAS and its applications in data analysis
- Experience working in the banking domain
- Understanding of version control systems, specifically Git and Bitbucket
- Knowledge of pandas, NumPy, statsmodels, scikit-learn, matplotlib, PySpark, and SASPy

Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Excellent problem-solving and analytical skills
- Ability to work collaboratively in a fast-paced environment
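
As a small illustration of the Python/SQL/PySpark combination this listing asks for, the sketch below builds a Tableau-ready monthly spend extract from a hypothetical banking transactions table; the schema and paths are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("monthly-spend-extract").getOrCreate()

# Hypothetical banking transactions table registered as a view.
spark.read.parquet("/data/raw/transactions/").createOrReplaceTempView("transactions")

monthly = spark.sql("""
    SELECT account_id,
           date_trunc('month', txn_date) AS month,
           SUM(amount)                   AS total_spend,
           COUNT(*)                      AS txn_count
    FROM transactions
    WHERE status = 'POSTED'
    GROUP BY account_id, date_trunc('month', txn_date)
""")

# CSV extract that a Tableau data source can point at.
monthly.write.mode("overwrite").csv("/data/exports/monthly_spend", header=True)
```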

Posted 3 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


About The Role
Grade Level (for internal use): 10

Position Summary
Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI.

What You'll Do
You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud-native environment to:
- Build and support data ingestion and processing pipelines in the cloud. This entails extraction, load and transformation of big data from a wide variety of sources, both batch and streaming, using the latest data frameworks and technologies.
- Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements; ensure build-out of data dictionaries/data catalogues and detailed documentation and knowledge around these data assets, metrics and KPIs.
- Warehouse this data; build data marts, data aggregations, metrics, KPIs and business logic that lead to actionable insights into our product efficacy, marketing platform, customer behaviour, retention, etc.
- Build real-time monitoring dashboards and alerting systems.
- Coach and mentor other team members.

Who You Are
- 6+ years of experience in Big Data and Data Engineering.
- Strong knowledge of advanced SQL, data warehousing concepts and data mart design.
- Strong programming skills in SQL and Python/PySpark.
- Experience in design and development of data pipelines and ETL/ELT processes, on-premises and in the cloud.
- Experience with one of the cloud providers: GCP, Azure, AWS.
- Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
- Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer, etc.
- Experience with distributed version control environments such as Git and Azure DevOps.
- Experience building Docker images, fetching/promoting and deploying to production; integrating Docker container orchestration using Kubernetes by creating pods, ConfigMaps and deployments using Terraform.
- Able to convert business queries into technical documentation.
- Strong problem-solving and communication skills.
- Bachelor's or advanced degree in Computer Science or a related engineering discipline.

Good to have:
- Exposure to Business Intelligence (BI) tools like Tableau, Dundas, Power BI, etc.
- Agile software development methodologies.
- Experience working in multi-functional, multi-location teams.

Grade: 10
Location: Gurugram
Hybrid model: twice a week work from office
Shift time: 12 pm to 9 pm IST

What You'll Love About Us - Do Ask Us About These!
- Total Rewards: monetary, beneficial and developmental rewards!
- Work-Life Balance: you can't do a good job if your job is all you do!
- Prepare for the Future: Academy - we are all learners; we are all teachers!
- Employee Assistance Program: confidential and professional counselling and consulting.
- Diversity & Inclusion: HeForShe!
- Internal Mobility: grow with us!

About AutomotiveMastermind
Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales.
Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com.
At automotiveMastermind, we thrive on high energy at high speed. We're an organization in hyper-growth mode and have a fast-paced culture to match. Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of "Drive" and "Help" have been at the core of what we do, and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win.

What We Do
Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people, so we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
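
The listing mentions workflow management tools such as Airflow; as an illustration only, here is a minimal two-task Airflow DAG of the ingest-then-build-marts shape described above. The DAG id and callables are hypothetical, and the `schedule` argument assumes Airflow 2.4+ (older 2.x versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder: submit the Spark/ELT ingestion step here.
    print("ingesting batch ...")

def build_marts():
    # Placeholder: rebuild data marts, KPIs and aggregations.
    print("building data marts ...")

with DAG(dag_id="daily_product_insights",   # hypothetical pipeline name
         start_date=datetime(2025, 1, 1),
         schedule="@daily",
         catchup=False) as dag:
    PythonOperator(task_id="ingest", python_callable=ingest) \
        >> PythonOperator(task_id="build_marts", python_callable=build_marts)
```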

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential, if you're ready for a challenge and growth.

Experience: 7+ years
Location: Chennai, Hyderabad (work from office)
Immediate joiners only
Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (AWS is mandatory)

JD:
- Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
- Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
- Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
- Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.

Regards,
R Usha
usha@livecjobs.com
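
The JD above mentions automating workflows with Databricks APIs; as one hedged example, the snippet below triggers an existing Databricks job through the Jobs API 2.1 `run-now` endpoint. Host, token, and job id are placeholders.

```python
import requests

# Hypothetical workspace, token and job id; endpoint per Databricks Jobs API 2.1.
DATABRICKS_HOST = "https://example.cloud.databricks.com"
TOKEN = "<personal-access-token>"
JOB_ID = 12345

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"job_id": JOB_ID},
    timeout=30,
)
resp.raise_for_status()
print("Triggered run:", resp.json()["run_id"])
```

In practice an orchestrator such as Airflow would make this call on a schedule rather than a one-off script.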

Posted 4 days ago

Apply

6.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Key Attributes: Adaptability & Agility
- Thrive in a fast-paced, ever-evolving environment with shifting priorities.
- Demonstrated ability to quickly learn and integrate new technologies and frameworks.
- Strong problem-solving mindset with the ability to juggle multiple priorities effectively.

Core Responsibilities
- Design, develop, test, and maintain robust Python applications and data pipelines using Python/PySpark.
- Define and implement smart data pipelines from RDBMS to graph databases.
- Build and expose APIs using AWS Lambda and ECS-based microservices.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, efficient, and scalable code following best practices.
- Troubleshoot, debug, and optimise applications for performance and reliability.
- Contribute to the setup and maintenance of CI/CD pipelines and deployment workflows as required.
- Ensure security, compliance, and observability across all development activities.

All you need is...

Required Skills & Experience
- Expert-level proficiency in Python with a strong grasp of object-oriented and functional programming.
- Solid experience with SQL and graph databases (e.g., Neo4j, Amazon Neptune).
- Hands-on experience with cloud platforms; AWS and/or Azure is a must.
- Proficiency in PySpark or similar data ingestion and processing frameworks.
- Familiarity with DevOps tools such as Docker, Kubernetes, Jenkins, and Git.
- Strong understanding of CI/CD, version control, and agile development practices.
- Excellent communication and collaboration skills.

Desirable Skills
- Experience with agentic AI, machine learning, or LLM-based systems.
- Familiarity with Apache Iceberg or similar modern data lakehouse formats.
- Knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible.
- Understanding of microservices architecture and distributed systems.
- Exposure to observability tools (e.g., Prometheus, Grafana, ELK stack).
- Experience working in Agile/Scrum environments.

Minimum Qualifications
- 6 to 8 years of hands-on experience in Python development and data engineering.
- Demonstrated success in delivering production-grade software and scalable data solutions.
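
To make the "RDBMS to graph database" pipeline concrete, here is a hypothetical sketch that loads customer-account edges into Neo4j with the official Python driver. Connection details and the data model are invented, and the `execute_write` call assumes a Neo4j 5.x driver.

```python
from neo4j import GraphDatabase

# Connection details and the data model are hypothetical.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<secret>"))

# In practice these rows would be read from the RDBMS (e.g., via PySpark/JDBC).
rows = [{"customer_id": 1, "account_id": "A-100"},
        {"customer_id": 1, "account_id": "A-101"}]

def load_edges(tx, batch):
    # MERGE keeps the load idempotent: nodes and edges are created only once.
    tx.run("""
        UNWIND $batch AS row
        MERGE (c:Customer {id: row.customer_id})
        MERGE (a:Account  {id: row.account_id})
        MERGE (c)-[:OWNS]->(a)
    """, batch=batch)

with driver.session() as session:
    session.execute_write(load_edges, rows)  # `execute_write` is the 5.x driver API
driver.close()
```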

Posted 4 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
As a Technical Lead, you will work on both offshore and onsite client projects involving Oracle BI Applications/FAW or OBIEE/OAC/ODI implementations. You will interact with clients to understand and gather requirements, and you will be responsible for technical design, development, and system/integration testing using Oracle methodologies.

Desired Profile
- End-to-end ODI, OAC and Oracle BI Applications/FAW implementation experience.
- Expert knowledge of BI Applications/FAW, including basic and advanced configurations with Oracle eBS Suite/Fusion as the source system.
- Expert knowledge of OBIEE/OAC RPD design and reports design.
- Expert knowledge of ETL (ODI) design/OCI DI/OCI Data Flow.
- Mandatory to have one of these skills: PL/SQL, BI Publisher, or BI Apps.
- Good to have EDQ and PySpark skills.
- Architectural solution definition.
- Any industry-standard certifications will be a plus.
- Good knowledge of Oracle database and development; experience in database applications.
- Creativity, personal drive, influencing and negotiating, problem solving.
- Building effective relationships, customer focus, effective communication, coaching.
- Ready to travel as and when required by the project.

Experience
- 8-12 years of data warehousing and business intelligence project experience.
- 4-6 years of project experience on BI Applications/FAW and OBIEE/OAC/ODI/OCI DI, with at least 2 complete lifecycle implementations.
- 4-6 years of specialized BI Applications and OBIEE/OAC/ODI/OCI DI customization and solution architecture experience.
- Worked on Financial, SCM or HR Analytics recently in implementation and configuration.

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Title: Analytics Engineer II
Corporate Title: Assistant Vice President
Corporate Band: E
Location: Bangalore

Job Description
Group Data Services (GDS) leads Swiss Re's ambition to be a truly data-driven risk knowledge company. GDS brings expertise and experience covering all aspects of data and analytics to enable Swiss Re in its vision to make the world more resilient. The Data Science & AI unit delivers state-of-the-art AI/GenAI solutions for the Swiss Re Group, working in close collaboration with the business units.

The Opportunity
To extend the existing AI-powered search and conversational solution, we are seeking a motivated Data/Software Engineer with a focus on data to build and own scalable platforms, end-to-end solutions and services around information retrieval, text mining, big data/document analytics, and LLM applications.

Key Responsibilities
- Design and develop scalable data engineering solutions for the AI search and conversational product.
- Collaborate and iterate: pair with engineers, data scientists, and business experts to release improvements at pace.
- Proactively identify potential for continuous improvement of existing solutions.
- Apply and implement Agile Scrum and DevOps best practices while driving the project forward.
- Build sustainable relationships with key business and IT stakeholders to become a trusted partner in the field of information retrieval and big data/document analytics.
- Ensure timely customer communication and coordination of follow-up activities.

About You
To excel in this role, you have:
- 5+ years of professional experience in data engineering/software development in an enterprise environment, with strong programming skills in Python/Java or equivalent.
- Knowledge of some of the following technologies: Spark/PySpark, RDBMS, API programming, CI/CD.
- Understanding of complex enterprise software landscapes: distributed systems, big data, security.
- Excellent verbal and written English skills.
- Experience working with or in Agile teams and understanding of Agile practices.
- Bachelor's/Master's degree in computer science or equivalent.

How You Work
- Team player with a "can do" attitude.
- Self-directed and proactive: you surface problems and drive them to resolution.
- Calm under pressure; balance delivery speed with development quality.
- Customer-obsessed; hold the bar high for usability, accessibility, and reliability.
- AI-curious problem solver who creates efficient backend solutions with the help of AI.

Bonus Points
- Understanding of (re)insurance and/or financial services information needs.
- Experience with the Palantir platform.
- Experience in LLM, RAG and related engineering.

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking.
We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience.

Reference Code: 133963
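
For flavor, here is a toy lexical-retrieval sketch in the information-retrieval space this role covers, using the rank_bm25 package. Real systems would combine this with vector search and an LLM layer, and the corpus here is invented.

```python
from rank_bm25 import BM25Okapi

# Invented mini-corpus; real pipelines tokenize and normalize far more carefully.
docs = ["property casualty reinsurance treaty terms",
        "life and health underwriting guidelines",
        "natural catastrophe risk model documentation"]
bm25 = BM25Okapi([d.split() for d in docs])

query = "catastrophe risk".split()
scores = bm25.get_scores(query)
best = max(range(len(docs)), key=scores.__getitem__)
print(docs[best], f"(score {scores[best]:.2f})")
```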

Posted 4 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
As a Technical Lead, you will work on both offshore and onsite client projects involving Oracle BI Applications/FAW or OBIEE/OAC/ODI implementations. You will interact with clients to understand and gather requirements, and you will be responsible for technical design, development, and system/integration testing using Oracle methodologies.

Desired Profile
- End-to-end ODI, OAC and Oracle BI Applications/FAW implementation experience.
- Expert knowledge of BI Applications/FAW, including basic and advanced configurations with Oracle eBS Suite/Fusion as the source system.
- Expert knowledge of OBIEE/OAC RPD design and reports design.
- Expert knowledge of ETL (ODI) design/OCI DI/OCI Data Flow.
- Mandatory to have one of these skills: PL/SQL, BI Publisher, or BI Apps.
- Good to have EDQ and PySpark skills.
- Architectural solution definition.
- Any industry-standard certifications will be a plus.
- Good knowledge of Oracle database and development; experience in database applications.
- Creativity, personal drive, influencing and negotiating, problem solving.
- Building effective relationships, customer focus, effective communication, coaching.
- Ready to travel as and when required by the project.

Experience
- 8-12 years of data warehousing and business intelligence project experience.
- 4-6 years of project experience on BI Applications/FAW and OBIEE/OAC/ODI/OCI DI, with at least 2 complete lifecycle implementations.
- 4-6 years of specialized BI Applications and OBIEE/OAC/ODI/OCI DI customization and solution architecture experience.
- Worked on Financial, SCM or HR Analytics recently in implementation and configuration.

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 4 days ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description
As a Technical Lead, you will work on both offshore and onsite client projects involving Oracle BI Applications/FAW or OBIEE/OAC/ODI implementations. You will interact with clients to understand and gather requirements, and you will be responsible for technical design, development, and system/integration testing using Oracle methodologies.

Desired Profile
- End-to-end ODI, OAC and Oracle BI Applications/FAW implementation experience.
- Expert knowledge of BI Applications/FAW, including basic and advanced configurations with Oracle eBS Suite/Fusion as the source system.
- Expert knowledge of OBIEE/OAC RPD design and reports design.
- Expert knowledge of ETL (ODI) design/OCI DI/OCI Data Flow.
- Mandatory to have one of these skills: PL/SQL, BI Publisher, or BI Apps.
- Good to have EDQ and PySpark skills.
- Architectural solution definition.
- Any industry-standard certifications will be a plus.
- Good knowledge of Oracle database and development; experience in database applications.
- Creativity, personal drive, influencing and negotiating, problem solving.
- Building effective relationships, customer focus, effective communication, coaching.
- Ready to travel as and when required by the project.

Experience
- 8-12 years of data warehousing and business intelligence project experience.
- 4-6 years of project experience on BI Applications/FAW and OBIEE/OAC/ODI/OCI DI, with at least 2 complete lifecycle implementations.
- 4-6 years of specialized BI Applications and OBIEE/OAC/ODI/OCI DI customization and solution architecture experience.
- Worked on Financial, SCM or HR Analytics recently in implementation and configuration.

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges.
We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 4 days ago

Apply

150.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description OUR IMPACT Platform Solutions, Goldman Sachs delivers a broad range of financial services across investment banking, securities, investment management and consumer banking to a large and diversified client base that includes corporations, financial institutions, governments, and individuals. Clients embed innovative financial products and solutions that create customer-centered experiences, powered by Goldman Sachs. The businesses of Platform Solutions share a developer-centric mindset and cloud-native platforms. We utilize financial products, including credit cards, installment financing and high yield savings accounts into the ecosystems of major brands to serve millions of loyal customers. We make it easy to offer a range of financial products powered by an API-first platform with the backing of Goldman Sachs' 150+ years of financial expertise. We offer a customized deployment approach while providing a modern, agile technology stack all supported by our long history of financial expertise, risk management and regulatory knowledge. In Platform Solutions (PS), We Power Clients With Innovative And Customer-centered Financial Products. We Bring The Best Qualities Of a Technology Player And Combine That With The Best Attributes Of a Large Bank. PS Is Comprised Of Four Main Businesses, Underpinned By Engineering, Operations And Risk Management Transaction Banking, a cash management and payments platform for clients building a corporate treasury system Enterprise Partnerships, consumer financial products that companies embed directly within their ecosystems to better serve their end customers ETF Accelerator, a platform for clients to launch, list and manage exchange-traded funds Join us on our journey to deliver financial products and platforms that prioritize the customer and developer experience. Your Impact This position will play a key role on the First Line Risk and Control team, supporting Consumer Monitoring & Testing and driving the implementation of horizontal Consumer risk programs. This individual will be responsible for executing risk-based testing, liasing with product, operations, compliance, and legal teams to ensure regulatory adherence. 
The role will also provide the opportunity to drive development and enhancement of risk and control programs. Execute testing and monitoring of regulatory, policy and process compliance. Gather and synthesize data to determine root causes and trends related to testing failures. Propose effective and efficient methods to enhance testing and sampling strategies (including automation) to ensure the most effective risk detection, analyses and control solutions. Proactively identify potential business risks, process deficiencies and improvement opportunities and make recommendations for additional controls and corrective action to enhance the efficiency and effectiveness of risk mitigation processes. Maintain effective communication with stakeholders and support teams in remediation of testing errors; assist with implementation of corrective actions related to testing fails and non-compliance with policies and procedures. Identify continuous improvement opportunities to meet changing requirements, driving maximum visibility to the executive audience. Work closely with enterprise risk teams to ensure business line risks are being shared and rolled up to firm-wide risk summaries. Your Skills 2-4 years of testing, audit, or compliance experience in consumer financial services. Bachelor's degree or equivalent military experience. Knowledge of applicable U.S. federal and state consumer lending laws and regulations as well as industry association standards, including, among others, the Truth in Lending Act (Reg Z), Equal Credit Opportunity Act (Reg B), Fair Credit Reporting Act (Reg V), and UDAAP. Understanding of test automation frameworks such as data-driven and hybrid-driven. Knowledge of testing concepts, methodologies, and technologies. Genuine excitement and passion for leading root cause analysis, troubleshooting technical process failures and implementing fixes to operationalize a process. Analytical, critical thinking and problem-solving skills. Highly motivated self-starter with strong organizational skills, attention to detail, and the ability to remain organized in a fast-paced environment. Interpersonal and relationship management skills. Integrity, ethical standards, and sound judgment; ability to exercise discretion with respect to sensitive information. Ability to summarize observations and present them in a clear, concise manner to peers, managers and senior Consumer Compliance management. Ability to quickly grasp complex concepts, including global business and regulatory matters. Confidence in expressing a point of view with management. Plus: CPA, audit experience, CRCM, and proficiency in Aqua Data Studio, Snowflake, Splunk, Excel macros, Tableau, and Hadoop/PySpark/Spark/Python/R. About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.
We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Who We Are With Citi’s Analytics & Information Management (AIM) group, you will do meaningful work from Day 1. Our collaborative and respectful culture lets people grow and make a difference in one of the world’s leading financial services organizations. The purpose of the group is to use Citi’s data assets to analyze information and create actionable intelligence for our business leaders. We value what makes you unique so that you have the opportunity to shine. You also get the opportunity to work with the best minds and top leaders in the analytics space. What The Role Is The role of Officer – Data Management/Information Analyst will be part of the AIM team based out of Bengaluru, India, supporting the Global Workforce Optimization (GWFO) unit. The GWFO team supports capacity planning across the organization. Its primary responsibility is to forecast future demand (inbound/outbound call volume, back-office process volume, etc.) and the capacity required to fulfill that demand. This also includes forecasting short-term demand (daily/hourly) and scheduling agents accordingly. The GWFO team is also responsible for collaborating with multiple stakeholders to develop optimal hiring plans that ensure adequate capacity while optimizing the operational budget. In this role, you will work alongside a highly talented team of analysts to build data solutions that track key business metrics and support workforce optimization activities. You will be responsible for understanding and mapping out the data landscape for current and new businesses onboarded by GWFO, and for designing the data stores and pipelines needed for capacity planning, reporting and analytics, as well as real-time monitoring. You will work closely with GWFO’s technology partners to get these solutions implemented in a compliant environment. Who You Are Data Driven. A proven track record of enabling decision making and problem solving with data. Conceptual thinking skills must be complemented by a strong quantitative orientation and data-driven approach. Excellent Problem Solver. You are a critical thinker, able to ask the right questions, make sense of a situation and come up with intelligent solutions. Strong Team Player. You build trusted relationships with your team members. You are ready to offer unconditional assistance, will listen, share knowledge, and are always ready to provide support as needed. Strong Communicator. You can communicate verbally and in writing with clarity and can structure and present your work to your partners and leadership. Clear Results Orientation. You display a keen focus on achieving both short- and long-term goals and have experience driving and executing an agenda in a demanding and fast-paced environment, with an eye on risks and controls. Innovative. You are always challenging yourself and your team to find better and faster ways of doing things. What You Do Data Exploration. Understand underlying data sources by delving into multiple platforms scattered across the organization. You do what it takes to gather information by connecting with people across business teams and technology. Build Data Assets. You have a strong data design background and are capable of developing and building multi-dimensional data assets and pipelines that capture abundant information about various lines of business. Process & Controls Orientation.
You develop strong processes and robust controls to address risk, and you seek to propagate that culture as a core value of your team. Dashboarding and Visualization. You develop insightful, visually compelling and engaging dashboards that support decision making and drive adoption. Flawless Execution. You manage and sequence delivery of reporting and data needs by actively managing requests against available bandwidth and identifying opportunities for improved productivity. Be an Enabler. You support your team and help them accomplish their goals with empathy. You act as a facilitator, remove blockers and create a positive atmosphere for them to be innovative and productive at work. Qualifications Must have 3+ years of work experience, largely in the data management/engineering space. Must have expertise working with SQL. Must have expertise working with PySpark/Python for data extraction and deep-dive activities. Prior experience in an Operations role is desirable. Working experience with the MS Office package (Excel, Outlook, PowerPoint, etc., with VBA) and/or BI visualization tools like Tableau is a plus. ------------------------------------------------------ Job Family Group: Decision Management ------------------------------------------------------ Job Family: Data/Information Management ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Title Data Engineer Job Description Data Engineer! Hello, we’re IG Group. We’re a global, FTSE 250-listed company made up of a collection of progressive fintech brands in the world of online trading and investing. The best part? We’ve snapped up many awards for our top-class platforms, forward-thinking products, and incredible employee experiences. Who We’re Looking For You’re curious about things like the client experience, the rapid developments in tech, and the complex world of fintech regulation. You’re also a confident, creative thinker with a knack for innovating. We know that you know every problem has a solution. Here, you can try new ideas, and lead the way in creating inspiring experiences for our clients and everyone around you. We don’t fit the corporate stereotype. If you want to work for a traditional, suit-and-tie corporate that just gives you a pay cheque at the end of the month, we might not be for you. But, if you have that IG Group energy and you can stand behind what we believe in, let’s raise the bar together. About The Team We are looking for a Data Engineer for our team in our Bangalore office. The role, as well as the projects you will participate in, is crucial for the whole of IG. Data Engineering is responsible for collecting data from various sources and generating insights for our business stakeholders. As a Data Engineer you will be responsible for the delivery of our projects, participate in the whole project life cycle (development and delivery) applying Agile best practices, and ensure good-quality engineering. You will be working with other technical team members to build ingestion pipelines and a shared company-wide data platform in GCP, as well as supporting and evolving our wide range of services in the cloud. You will own the development and support of our applications, which also includes our out-of-hours support rota. The Skills You'll Need You will be someone who can demonstrate: Good understanding of the IT development life cycle with a focus on quality and continuous delivery and integration. 3-5 years of experience in Python, data processing (pandas/PySpark), and SQL. Good experience with cloud, particularly GCP. Good communication skills, being able to communicate technical concepts to a non-technical audience. Proven experience working in Agile environments. Experience working on data-related projects, from data ingestion to analytics and reporting. Good understanding of Big Data and distributed compute frameworks such as Spark, for both batch and streaming workloads. Familiarity with Kafka and different data formats: Avro/Parquet/ORC/JSON. It would be great if you also have experience with GitLab and containerisation (Nomad or Kubernetes). How You’ll Grow When you join IG Group, we want you to have more than a job – we want you to have a career. And you can. If you spot an opportunity, we want you to chase it. Stretch yourself, challenge your self-beliefs and go for the things you dream of. With internal and external learning opportunities and the tools to help you skyrocket to success, we’ll support you all the way. And these opportunities truly are endless because we have some bold targets. We plan to expand our global presence, increase revenue growth, and ultimately deliver the world’s best trading experience. We’d love to have you along for the ride. The Perks It really is more than a job. We’ll recognise your talent and make sure that you can still have a life – at work, and outside of it.
Networks, committees, awards, sports and social clubs, mentorships, volunteering opportunities, extra time off… the list goes on. Matched giving for your fundraising activity. Flexible working hours and work-from-home opportunities. Performance-related bonuses. Insurance and medical plans. Career-focused technical and leadership training. Contribution to gym memberships and more. A day off on your birthday. Two days’ volunteering leave per year. Where You’ll Work We follow a hybrid working model; we reckon it’s the best of both worlds. This model also feeds into our secret ingredients for innovation: diversity, flexibility, and close connection. Plus, you’ll be welcomed into a diverse and inclusive workforce with a lot of creative energy. Ask our employees what their favourite thing is about working at IG, and you’ll hear an echo of ‘our culture’! That’s because you can come to work as your authentic self. The things that make you, you – like your ethnicity, sexual orientation, faith, age, gender identity/expression or physical capacity – can bring a fresh perspective or new skill to our business. That’s why we welcome people from various walks of life; and anyone who wants to help us realise our vision and strategy. So, if you’re keen to connect with our values, and lead the charge on innovation, you know what to do. Apply! Number of openings 1

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Naukri logo

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant - Data Engineer, AWS + Python, Spark, Kafka for ETL! Responsibilities Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift). Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost. Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS. Build data pipelines by building ETL (Extract-Transform-Load) processes. Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Analyse business and functional requirements, which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs. Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements. Participate in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems. Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security. Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work the same way. Coordinate with release management and other supporting teams to deploy changes in the production environment. Qualifications we seek in you! Minimum Qualifications Experience in designing and implementing data pipelines, building data applications, and data migration on AWS. Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift. Experience with Databricks will be an added advantage. Strong experience in Python and SQL. Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/Skills Master’s degree in Computer Science, Electronics, or Electrical Engineering. AWS Data Engineering and cloud certifications, Databricks certifications. Experience with multiple data integration technologies and cloud platforms. Knowledge of Change & Incident Management processes. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

We are looking for an immediate joiner and experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions. Responsibilities Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark. Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies. Write efficient SQL queries for data extraction, transformation, and analysis. Implement and manage Kafka streams for real-time data processing. Utilize scheduling tools to automate data workflows and processes. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity by implementing robust data validation processes. Optimize existing data processes for performance and scalability. Requirements Experience with GCP. Knowledge of data warehousing concepts and best practices. Familiarity with machine learning and data analysis tools. Understanding of data governance and compliance standards. This job was posted by Arun Kumar K from krtrimaIQ Cognitive Solutions.

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Position Overview As a Sr. Data Engineer at Oportun, you will be a key member of our team, responsible for designing, developing, and maintaining sophisticated software and data platforms in service of the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market. This is a role where you will have the opportunity to take responsibility for leading the technology effort – from technical requirements gathering to final successful delivery of the product – for large initiatives (cross-functional and multi-month-long projects). Responsibilities Data Architecture and Design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise, and define optimal data models and structures. Data Pipeline Development And Optimization Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability. Database Management And Optimization Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval. Data Quality And Governance Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets. Mentorship And Leadership Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code.
Collaboration And Stakeholder Management Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value. Performance Monitoring And Optimization Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability. Common Requirements You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends. You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective solutions. You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility. You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability. You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team. You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and drive adoption of new practices within your team. You take ownership of customer issues, including initial troubleshooting, identification of root cause, and issue escalation or resolution, while maintaining the overall reliability and performance of our systems. You set the benchmark for responsiveness, ownership, and overall accountability of engineering systems. You independently drive and lead multiple features, contribute to one or more large projects, and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed. You keep your lead/EM informed about your work and that of the team so they can share it with stakeholders, including escalation of issues. Qualifications Bachelor's or Master's degree in Computer Science, Data Science, or a related field. 5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management. Proficiency in programming languages like Python/PySpark and Java or Scala. Expertise in big data technologies such as Hadoop, Spark, and Kafka. In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MariaDB, NoSQL databases). Experience and expertise in building complex end-to-end data pipelines. Experience with orchestration and designing job schedules using tools like Jenkins, Airflow, or Databricks. Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.). Ability to mentor junior team members. Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse). Strong leadership, problem-solving, and decision-making skills. Excellent communication and collaboration abilities. Familiarity or certification in Databricks is a plus.
We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personally identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).

Posted 4 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Description Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the comprehensive e-commerce product catalog. We power the online shopping experience for customers worldwide, enabling them to find, discover, and purchase anything they desire. Our scaled, distributed systems process hundreds of millions of updates across billions of products, including physical, digital, and service offerings. You will be part of the Catalog Support Programs (CSP) team under Catalog Support Operations (CSO) in the ASCS org. CSP provides program management, technical support, and strategic initiatives to enhance the customer experience, owning the implementation of business logic and configurations for ASCS. We are establishing a new centralized Business Intelligence team to build self-service analytical products for ASCS that provide relevant insights and data deep dives across the business. By leveraging advanced analytics and AI/ML, we will transform catalog data into predictive insights, helping prevent customer issues before they arise. Real-time intelligence will support proactive decision-making, enabling faster, data-driven decisions across the organization and driving long-term growth and an enhanced customer experience. We are looking for a creative and goal-oriented BI Engineer to join our team to harness the full potential of data-driven insights to make informed decisions, identify business opportunities and drive business growth. This role requires an individual with excellent analytical abilities, knowledge of business intelligence solutions, business acumen, and the ability to work with various tech/product teams across ASCS. This BI Engineer will support the ASCS org by owning complex reporting and automating reporting solutions, ultimately providing insights and drivers for decision making. You must be a self-starter and able to learn on the go. You should have excellent written and verbal communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. As a Business Intelligence Engineer in the CSP team, you will be responsible for analyzing petabytes of data to identify business trends and points of customer friction, and developing scalable solutions to enhance customer experience and safety. You will work closely with internal stakeholders to define key performance indicators (KPIs), implement them into dashboards and reports, and present insights in a concise and effective manner. This role will involve collaborating with business and tech leaders within ASCS and cross-functional teams to solve problems, create operational efficiencies, and deliver against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively uncover new insights that drive decision-making by senior leadership. As a key member of the CSP team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. There will be a steep learning curve, adding a fair amount of business skill to the individual.
Key job responsibilities Work closely with BIEs, Data Engineers, and Scientists in the team to collaborate effectively with product managers and create scalable solutions for business problems Create program goals and related metrics, track progress, and manage through obstacles to help the team achieve objectives Identify opportunities for improvement or automation in existing data processes and lead the changes using business acumen and data handling skills Ensure best practices on data integrity, design, testing, implementation, documentation, and knowledge sharing Contribute to supplier operations strategy development based on data analysis Lead strategic projects to formalize and scale organizational processes Build and manage weekly, monthly, and quarterly business review metrics Build data reports and dashboards using SQL, Excel, and other tools to improve business efficiency across programs Understand loosely defined or structured problems and provide BI solutions for difficult problems, delivering large-scale BI solutions Provide solutions that drive the team's business decisions and highlight new opportunities Improve code quality and optimize BI processes Demonstrate proficiency in a scripting language, data modeling, data pipeline design, and applying basic statistical methods (e.g., regression) for difficult business problems A day in the life A day in the life of a BIE-II will include: Working closely with cross-functional teams including Product/Program Managers, Software Development Managers, Applied/Research/Data Scientists, and Software Developers Building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making Leading reporting and analytics initiatives to drive data-informed decision making Designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight Transforming complex business requirements into actionable analytics solutions. About The Team This central BIE team within ASCS will be responsible for building a structured analytical data layer, bringing in BI discipline by defining metrics in a standardized way and establishing a single definition of metrics across the catalog ecosystem. They will also identify clear sources of truth for critical data. The team will build and maintain the data pipelines for critical projects tailored to the needs of ASCS teams, leveraging catalog data to provide a unified view of product information. This will support real-time decision-making and empower teams to make data-driven decisions quickly, driving innovation. This team will leverage advanced analytics that can shift us to a proactive, data-driven approach, enabling informed decisions that drive growth and enhance the customer experience. This team will adopt best practices, standardize metrics, and continuously iterate on queries and data sets as they evolve. Automated quality controls and real-time monitoring will ensure consistent data quality across the organization. Basic Qualifications 4+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. 
Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing, and building ETL pipelines. Experience with statistical analysis packages such as R, SAS, and Matlab. Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling. Experience developing and presenting recommendations of new metrics, allowing better understanding of the performance of the business. Experience writing complex SQL queries. Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field. Experience with scripting languages (e.g., Python, Java, R) and big data technologies/languages (e.g., Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes. Proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight to drive data-driven decision making. Experience applying basic statistical methods (e.g., regression, t-test, Chi-squared) as well as exploratory, deterministic, and probabilistic analysis techniques to solve complex business problems. Experience gathering business requirements and using industry-standard business intelligence tools to extract data, formulate metrics, and build reports. Track record of generating key business insights and collaborating with stakeholders. Strong verbal and written communication skills, with the ability to effectively present data insights to both technical and non-technical audiences, including senior management. Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets. Master's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field. Proven track record of conducting large-scale, complex data analysis to support business decision-making in a data warehouse environment. Demonstrated ability to translate business needs into data-driven solutions and vice versa. Relentless curiosity and drive to explore emerging trends and technologies in the field. Knowledge of data modeling and data pipeline design. Experience with statistical analysis and correlation analysis, as well as exploratory, deterministic, and probabilistic analysis techniques. Experience in designing and implementing custom reporting systems using automation tools. Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability). Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2990532

Posted 4 days ago

Apply

4.0 - 12.0 years

4 - 7 Lacs

Cochin

On-site

GlassDoor logo

Company Overview Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone’s best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed. Job Overview In this vital role, you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Deliver data pipeline projects from development to deployment, managing timelines and risks. Ensure data quality and integrity through meticulous testing and monitoring. Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions. Work closely with the product team and key collaborators to understand data requirements. Adhere to data engineering industry standards and best practices. Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies. Familiarity with code versioning using Git and code migration tools. Familiarity with JIRA. Stay up to date with the latest data technologies and trends. What we expect of you Basic Qualifications: Doctorate degree OR Master’s degree and 4 to 6 years of Information Systems experience OR Bachelor’s degree and 6 to 8 years of Information Systems experience OR Diploma and 10 to 12 years of Information Systems experience. Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP). Proficiency in Python, PySpark, SQL. Development knowledge in Databricks. Good analytical and problem-solving skills to address sophisticated data challenges.
Preferred Qualifications: Experience with data modeling. Experience working with ETL orchestration technologies. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Familiarity with SQL/NoSQL databases. Soft Skills: Skilled in breaking down problems, documenting problem statements, and estimating efforts. Effective communication and interpersonal skills to collaborate with multi-functional teams. Excellent analytical and problem-solving skills. Strong verbal and written communication skills. Ability to work successfully with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Compensation Estimated Pay Range: Exact compensation and offers of employment are dependent on circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location. Our Commitment to Diversity & Inclusion At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.

Posted 4 days ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

GlassDoor logo

About Company: A cloud and data analytics company that empowers businesses to unlock insights and drive innovation through modern data solutions. Role: Data Engineer Experience: 5 - 9 Years Location: Chennai & Hyderabad Notice Period: Immediate Joiner - 60 Days Roles and Responsibilities Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering or a related role. Proficiency in programming languages such as Python, Java, or Scala, and scripting languages like SQL. Experience with big data technologies and ETL processes. Knowledge of cloud services (AWS, Azure, GCP) and their data-related services. Familiarity with data modeling, data warehousing, and building high-volume data pipelines. Understanding of distributed systems and microservices architecture. Experience with source control tools like Git, and CI/CD practices. Strong problem-solving skills and ability to work independently. Excellent communication and collaboration skills. Mandatory skill set: Python, PySpark, SQL, Databricks, AWS

Posted 4 days ago

Apply

Exploring PySpark Jobs in India

PySpark, the Python API for the Apache Spark data processing engine, is in high demand in the Indian job market. With the increasing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in big data and analytics, exploring PySpark jobs in India could be a great career move.
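To give a flavor of the framework before diving into the job market, here is a minimal, illustrative PySpark sketch. It assumes a local Spark installation (pip install pyspark); the jobs.csv file and its title and city columns are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a local Spark session.
    spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

    # Read a (hypothetical) CSV of job postings with "title" and "city" columns.
    df = spark.read.csv("jobs.csv", header=True, inferSchema=True)

    # Count PySpark-related postings per city, most active cities first.
    (df.filter(F.col("title").contains("PySpark"))
       .groupBy("city")
       .count()
       .orderBy(F.desc("count"))
       .show())

    spark.stop()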

Top Hiring Locations in India

Here are 5 major cities in India where companies are actively hiring for PySpark roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi

Average Salary Range

The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the field of PySpark, a typical career progression may look like this:
1. Junior Developer
2. Data Engineer
3. Senior Developer
4. Tech Lead
5. Data Architect

Related Skills

In addition to PySpark, professionals in this field are often expected to have or develop skills in:
  • Python programming
  • Apache Spark
  • Big data technologies (Hadoop, Hive, etc.)
  • SQL
  • Data visualization tools (Tableau, Power BI)

Interview Questions

Here are 25 interview questions you may encounter when applying for PySpark roles; illustrative code sketches for several of them follow the list:

  • Explain what PySpark is and its main features (basic)
  • What are the advantages of using PySpark over other big data processing frameworks? (medium)
  • How do you handle missing or null values in PySpark? (medium)
  • What is RDD in PySpark? (basic)
  • What is a DataFrame in PySpark and how is it different from an RDD? (medium)
  • How can you optimize performance in PySpark jobs? (advanced)
  • Explain the difference between map and flatMap transformations in PySpark (basic)
  • What is the role of a SparkContext in PySpark? (basic)
  • How do you handle schema inference in PySpark? (medium)
  • What is a SparkSession in PySpark? (basic)
  • How do you join DataFrames in PySpark? (medium)
  • Explain the concept of partitioning in PySpark (medium)
  • What is a UDF in PySpark? (medium)
  • How do you cache DataFrames in PySpark for optimization? (medium)
  • Explain the concept of lazy evaluation in PySpark (medium)
  • How do you handle skewed data in PySpark? (advanced)
  • What is checkpointing in PySpark and how does it help in fault tolerance? (advanced)
  • How do you tune the performance of a PySpark application? (advanced)
  • Explain the use of Accumulators in PySpark (advanced)
  • How do you handle broadcast variables in PySpark? (advanced)
  • What are the different data sources supported by PySpark? (medium)
  • How can you run PySpark on a cluster? (medium)
  • What is the purpose of the PySpark MLlib library? (medium)
  • How do you handle serialization and deserialization in PySpark? (advanced)
  • What are the best practices for deploying PySpark applications in production? (advanced)
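
For the question on missing or null values, here is a minimal sketch of the three common approaches: dropping rows, filling with defaults, and coalescing to a fallback. The tiny DataFrame is invented for illustration, and this session and its imports are reused by the sketches that follow.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("interview-sketches").getOrCreate()

    df = spark.createDataFrame(
        [("alice", 30), ("bob", None), (None, 25)],
        ["name", "age"],
    )

    df.na.drop(subset=["name"]).show()   # drop rows whose name is null
    df.na.fill({"age": 0}).show()        # replace null ages with a default
    df.withColumn(                       # substitute a literal when name is null
        "name", F.coalesce(F.col("name"), F.lit("unknown"))
    ).show()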
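For map versus flatMap: map emits exactly one output element per input element, while flatMap flattens its results and can emit zero or more. A small RDD sketch, reusing the spark session above:

    lines = spark.sparkContext.parallelize(["hello world", "pyspark jobs"])

    # map: one output per input, so each line becomes a list of words.
    print(lines.map(lambda s: s.split(" ")).collect())
    # [['hello', 'world'], ['pyspark', 'jobs']]

    # flatMap: results are flattened into individual words.
    print(lines.flatMap(lambda s: s.split(" ")).collect())
    # ['hello', 'world', 'pyspark', 'jobs']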
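For joining DataFrames: join takes the other DataFrame, a key (or join expression), and a join type. The employees and departments tables below are invented for illustration and are reused by the later sketches.

    employees = spark.createDataFrame(
        [(1, "alice"), (2, "bob")], ["dept_id", "name"]
    )
    departments = spark.createDataFrame(
        [(1, "engineering"), (3, "finance")], ["dept_id", "dept_name"]
    )

    # Inner join keeps only matching dept_ids; "left", "right" and "full"
    # are the other commonly asked-about join types.
    employees.join(departments, on="dept_id", how="inner").show()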
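For partitioning: the number of partitions controls parallelism, repartition shuffles rows so that equal keys land in the same partition, and partitionBy controls the on-disk layout when writing. The output path here is purely illustrative.

    print(employees.rdd.getNumPartitions())        # current partition count

    by_dept = employees.repartition(4, "dept_id")  # 4 partitions, keyed by dept_id
    print(by_dept.rdd.getNumPartitions())

    # partitionBy writes one directory per dept_id value.
    by_dept.write.mode("overwrite").partitionBy("dept_id").parquet("/tmp/emp")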
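For UDFs: a user-defined function applies arbitrary Python logic to a column, at the cost of bypassing Spark's optimized built-in functions, which is the trade-off interviewers usually probe. A sketch, continuing with the employees DataFrame:

    from pyspark.sql.types import StringType

    @F.udf(returnType=StringType())
    def shout(s):
        # Plain Python, executed row by row on the executors.
        return s.upper() + "!" if s is not None else None

    employees.withColumn("loud_name", shout(F.col("name"))).show()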
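For caching and lazy evaluation: transformations only build an execution plan, nothing runs until an action is called, and cache() asks Spark to keep the computed result in memory for reuse by later actions.

    filtered = employees.filter(F.col("dept_id") == 1)  # lazy: no work yet
    filtered.cache()                                    # also lazy: just a hint

    print(filtered.count())  # first action: computes the result and caches it
    print(filtered.count())  # second action: served from the cache
    filtered.unpersist()     # release the memory when done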
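For broadcast variables and broadcast joins: a broadcast variable ships a read-only value once to every executor, and broadcasting a small DataFrame in a join lets Spark avoid a shuffle. Again a sketch, reusing the DataFrames defined above:

    # Broadcast variable: a small lookup dict shared across executors.
    lookup = spark.sparkContext.broadcast({1: "engineering", 3: "finance"})
    ids = spark.sparkContext.parallelize([1, 3, 1])
    print(ids.map(lambda k: lookup.value.get(k, "unknown")).collect())

    # Broadcast join: hint Spark to ship the small table to every executor.
    employees.join(F.broadcast(departments), on="dept_id").show()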

Closing Remark

As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies