5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Primary Skills: PySpark, Spark, and proficiency in SQL
Secondary Skills: Scala and Python
Experience: 3+ years
Bachelor's degree or foreign equivalent from an accredited institution is required; three years of progressive experience in the specialty will also be considered in lieu of each year of education.
At least 5 years of experience in PySpark and Spark with Hadoop distributed frameworks, handling large volumes of big data using the Spark and Hadoop ecosystems for data pipeline creation, deployment, maintenance and debugging.
Experience in scheduling and monitoring jobs and creating tools for automation.
At least 4 years of experience with Scala and Python required.
Proficient knowledge of SQL with any RDBMS.
Strong communication skills (verbal and written) with the ability to communicate across teams, internal and external, at all levels.
Ability to work within deadlines and effectively prioritize and execute tasks.
Preferred Qualifications:
At least 1 year of AWS development experience is preferred.
Experience in driving automations; DevOps knowledge is an added advantage.
Advanced conceptual understanding of at least one programming language.
Advanced conceptual understanding of one database and one operating system.
Understanding of software engineering, with practice in at least one project.
Ability to contribute to medium-to-complex tasks independently.
Exposure to design principles and the ability to understand design specifications independently.
Ability to run test cases and scenarios as per the plan.
Ability to accept and respond to production issues and coordinate with stakeholders.
Good understanding of the SDLC, analytical abilities, logical thinking, and awareness of the latest technologies and trends.
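For context on the pipeline-building work this role describes, below is a minimal PySpark batch-pipeline sketch. The paths, columns, and schema are illustrative assumptions, not part of the employer's actual stack.

```python
# Minimal PySpark batch-pipeline sketch; paths, columns, and schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest raw data from the Hadoop file system (illustrative path).
raw = spark.read.option("header", True).csv("hdfs:///landing/orders/2024-06-01/")

# Basic wrangling: type casting, de-duplication, and a derived column.
cleaned = (
    raw.withColumn("order_amount", F.col("order_amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("load_date", F.current_date())
)

# Persist the curated layer as partitioned Parquet for downstream jobs.
cleaned.write.mode("overwrite").partitionBy("load_date").parquet("hdfs:///curated/orders/")
```

A job of this shape would then typically be scheduled and monitored by an external orchestrator, which is the automation aspect the posting mentions.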
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview: DataOps L3. The role will leverage and enhance existing data and analytics technologies such as Power BI, Azure data engineering services, ADLS, ADB (Azure Databricks), Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.
Responsibilities:
5 to 10 years of IT and Azure data engineering experience.
Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
Experience in creating ADF pipelines to source and process data sets.
Experience in creating Databricks notebooks to cleanse, transform and enrich data sets.
Development experience in orchestration of pipelines.
Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
Experience in deployment and monitoring techniques.
Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
Experience in handling operations/integration with the source repository.
Must have good knowledge of data warehouse concepts and data warehouse modelling.
Working knowledge of SNOW, including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
Strong expertise in performance tuning and optimization of data processing systems.
Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
Develop and enforce best practices for data management, including data governance and security.
Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Proficient in implementing a DataOps framework.
Qualifications: Azure Data Factory, Azure Databricks, Azure Synapse, PySpark/SQL, ADLS, Azure DevOps with CI/CD implementation.
Nice-to-Have Skill Sets: Business Intelligence tools (preferably Power BI); DP-203 certified.
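To illustrate the ADF/Databricks-style cleanse-and-transform work listed above, here is a hedged Databricks-notebook sketch that reads a JSON feed and writes Parquet. The storage account, container, and column names are placeholders, not a specific client setup.

```python
# Illustrative Databricks-notebook cell: cleanse a JSON feed from ADLS Gen2 and write Parquet.
# Account, container, and column names are hypothetical; `spark` is provided by the notebook.
from pyspark.sql import functions as F

src = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/06/"
dst = "abfss://curated@examplelake.dfs.core.windows.net/sales/"

df = spark.read.json(src)

clean = (
    df.filter(F.col("order_id").isNotNull())                        # drop incomplete records
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))  # enforce a numeric type
      .withColumn("ingest_date", F.current_date())                  # audit column for partitioning
)

clean.write.mode("append").partitionBy("ingest_date").parquet(dst)
```

In practice a notebook like this would be one activity inside an ADF pipeline, with Azure DevOps CI/CD deploying the notebook and pipeline definitions.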
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!
Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.
Process Overview
ARQ supports global businesses of the Bank with solutions requiring judgment application, sound business understanding and an analytical perspective. Domain experience in the areas of Financial Research & Analysis, Quantitative Modeling, Risk Management and Prospecting Support provides solutions for revenue enhancement, risk mitigation and cost optimization. The division, comprising highly qualified associates, operates from Mumbai, GIFT City, Gurugram and Hyderabad.
Job Description
The individual should be capable of running technical processes relating to the execution of models across an Enterprise portfolio. This will involve familiarity with technical infrastructure (specifically GCP and Quartz), coding languages and the model development and software development lifecycle. In addition, there is an opportunity for the right candidates to support the implementation of new processes into target-state, as well as explore ways to make the processes more efficient and robust. Specifically: manage model execution, results analysis and reporting related to AMGQS models. The Analyst will also work with the implementation team to ensure that this critical function is well controlled.
Responsibilities
Write Python and/or PySpark code to automate production processes of several risk and loss measurement statistical models. Examples of model execution production processes are error attribution, scenario shock, sensitivity, result publication and reporting. Leverage skills in quantitative methods to conduct ongoing monitoring of model performance.
Also, possess capabilities in data science and data visualization techniques and tools.
Identify, analyze, monitor, and present risk factors and metrics to, and integrate with, business partners.
Proactively solve challenges with process design deficiencies, implementation and remediation efforts.
Perform operational controls, ensuring consistency and compliance across all functions, including procedures, critical-use spreadsheets and tool inventory.
Assist with enhancing the overall governance environment within the Operations space.
Partner with the IT team to perform system-related design assessments, control effectiveness testing, process testing, issue resolution monitoring and supporting the sign-off by management of processes and controls in scope.
Work with model implementation experts and technology teams to design and integrate Python workflows into the existing in-house target-state platform for process execution.
Requirements:
Experience: 3 to 8 years
Education: Graduate / Post Graduate from Tier 1 institutes - Bachelor's or master's degree in mathematics, engineering, physics, statistics, or financial mathematics/engineering
Foundational skills:
Good understanding of numerical analysis, probability theory, linear algebra, and stochastic analysis.
Proficiency in Python (numpy, pandas, OOP, unittest) and LaTeX. Prior experience with Git, Bitbucket and Agile is a plus.
Understanding of credit risk modelling and processes.
Integrates seamlessly across a complex set of stakeholders, internal partners and external resources.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration abilities.
Ability to thrive in a fast-paced, dynamic environment and adapt to evolving priorities and requirements.
Desired Skills: Excellent communication and collaboration abilities.
Work Location: Hyderabad & Mumbai
Work Timings: 11 am to 8 pm IST
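As a hedged illustration of the ongoing model-performance-monitoring work described above, the snippet below computes a simple population stability index (PSI) with pandas/numpy. The file locations, column names, and threshold are assumptions for illustration, not the bank's actual process.

```python
# Hedged sketch of an ongoing-monitoring check for model scores, using pandas/numpy only.
import numpy as np
import pandas as pd

def population_stability_index(expected: pd.Series, actual: pd.Series, bins: int = 10) -> float:
    """Simple PSI between a baseline score distribution and the latest run."""
    cuts = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    cuts[0], cuts[-1] = -np.inf, np.inf          # open-ended outer bins
    e = pd.cut(expected, cuts).value_counts(normalize=True, sort=False) + 1e-6
    a = pd.cut(actual, cuts).value_counts(normalize=True, sort=False) + 1e-6
    return float(((a - e) * np.log(a / e)).sum())

# Hypothetical inputs: score columns from a prior (baseline) run and the latest run.
baseline = pd.read_parquet("baseline_scores.parquet")["score"]
latest = pd.read_parquet("latest_scores.parquet")["score"]

psi = population_stability_index(baseline, latest)
print(f"PSI = {psi:.4f}; values above roughly 0.2 usually warrant investigation")
```

A check like this would normally be wrapped in unittest cases and run as part of the automated production process the posting describes.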
Posted 1 week ago
6.0 years
0 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities: We are seeking a highly skilled and experienced Python developer with 6-7 years of hands-on experience in software development.
Key Responsibilities:
- Design, develop, test and maintain robust and scalable backend applications using FastAPI to deliver high-performance APIs.
- Write reusable, efficient code following best practices.
- Collaborate with cross-functional teams to integrate user-facing elements with server-side logic.
- Architect and implement distributed, scalable microservices leveraging Temporal workflows for orchestrating complex processes.
- Participate in code reviews and mentor junior developers.
- Debug and resolve technical issues and production incidents.
- Follow agile methodologies and contribute to sprint planning and estimations.
- Strong communication and collaboration skills.
- Relevant certifications are a plus.
Required Skills:
- Strong proficiency in Python 3.x.
- Collaborate closely with DevOps to implement CI/CD pipelines for Python projects, ensuring smooth deployment to production environments.
- Integrate with various databases (e.g., Cosmos DB) and message queues (e.g., Kafka, Event Hubs) for seamless backend operations.
- Experience in one or more Python frameworks (Django, Flask, FastAPI).
- Develop and maintain unit and integration tests using frameworks like pytest and unittest to ensure code quality and reliability.
- Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
- Familiarity with asynchronous programming (e.g., asyncio, aiohttp) and event-driven architectures.
- Strong skill in PySpark for large-scale data processing.
- Solid understanding of Object-Oriented Programming and design principles.
- Proficient in using version control systems like Git.
Mandatory skill sets: Python Developer
Preferred skill sets: Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services
Years of experience required: 4-7 Years
Education qualification: B.Tech/B.E./MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
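As a small, hedged illustration of the FastAPI-style backend work this role describes, the sketch below defines two async endpoints. The model, routes, and in-memory store are illustrative stand-ins (for example, for a Cosmos DB-backed service), not PwC's actual services.

```python
# Minimal FastAPI sketch; entities and routes are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    order_id: str
    amount: float

_DB: dict[str, Order] = {}  # stand-in for a real backing store such as Cosmos DB

@app.post("/orders", status_code=201)
async def create_order(order: Order) -> Order:
    _DB[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
async def read_order(order_id: str) -> Order:
    if order_id not in _DB:
        raise HTTPException(status_code=404, detail="order not found")
    return _DB[order_id]

# Run locally (assumes uvicorn is installed): uvicorn main:app --reload
```

Endpoints like these are straightforward to cover with pytest via FastAPI's TestClient, which fits the unit/integration-testing requirement above.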
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 5 to 7 years Location: Bengaluru, Gurgaon, Pune About Us: AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance and other industry sectors. We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you. Job Summary : We are seeking an experienced and motivated Data Engineer with a strong background in Python, PySpark, and SQL, to join our growing data engineering team. The ideal candidate will have hands-on experience with cloud data platforms, data modelling, and a proven track record of building and optimising large-scale data pipelines in agile environments. Key Responsibilities : *Design, develop, and maintain robust data pipelines using Python, PySpark, and SQL. *Strong understanding of data modelling. *Proficient in using code management tools such as Git and GitHub. *Strong knowledge of query performance tuning and optimisation techniques. Role Requirements and Qualifications: *5+ years' experience as a data engineer in complex data ecosystem. *Extensive experience working in an agile environment. *Experience with cloud data platforms like AWS Redshift, Databricks. *Excellent problem-solving and communication skills. Why Join Us: *Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors. *Continuous investment in employee growth and professional development with a strong focus on up & re-skilling. *Competitive compensation & benefits, ESOPs and international assignments. *Supportive environment with healthy work-life balance and a focus on employee well-being. *Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing, for ingesting, wrangling, transforming and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks and DataProc, with strong coding skills in Python, PySpark and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Outcomes
Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance and performance through design patterns and reusing proven solutions. Support the Project Manager in day-to-day project execution and account for the developmental activities of others. Interpret requirements, create optimal architecture and design solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code using best standards, debug and test solutions to ensure best-in-class quality. Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure. Create data schemas and models effectively. Develop and manage data storage solutions including relational databases, NoSQL databases, Delta Lakes and data lakes. Validate results with user representatives, integrating the overall solution. Influence and enhance customer satisfaction and employee engagement within project teams.
Measures Of Outcomes
Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of reoccurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.
Outputs Expected
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components and data models.
Interface With Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.
Skill Examples
Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning. Experience in data warehouse design and cost improvements. Apply and optimize data models for efficient storage, retrieval and processing of large datasets. Communicate and explain design/development aspects to customers. Estimate time and resource requirements for developing/debugging features/components. Participate in RFP responses and solutioning. Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples
Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF and ADLF. Proficient in SQL for analytics and windowing functions. Understanding of data schemas and models. Familiarity with domain-related data. Knowledge of data warehouse optimization techniques. Understanding of data security concepts. Awareness of patterns, frameworks and automation practices.
Additional Comments
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.
Must-Have Skills
8+ years of hands-on experience in data engineering or big data development. Strong proficiency in PySpark and SQL for data transformation and pipeline development. Experience working in Azure Databricks or equivalent Spark-based cloud platforms. Practical knowledge of cloud data environments – Azure, AWS, or GCP. Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design. Proven experience designing and maintaining ETL/ELT pipelines in production. Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).
Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
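To illustrate the windowing-function analytics and warehouse-style transformations referenced above, a short PySpark sketch follows. The table and column names are hypothetical, not part of any specific engagement.

```python
# Hedged example of window-function analytics over a star-schema fact table in PySpark.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("windowing_demo").getOrCreate()

fact_sales = spark.read.table("curated.fact_sales")  # hypothetical fact table

# Most recent order per customer (row_number over a descending date window).
latest_w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest_per_customer = (
    fact_sales.withColumn("rn", F.row_number().over(latest_w))
              .filter("rn = 1")
              .drop("rn")
)

# Running spend per customer (cumulative sum over an ordered window).
running_w = (Window.partitionBy("customer_id")
                   .orderBy("order_date")
                   .rowsBetween(Window.unboundedPreceding, Window.currentRow))
with_running_total = fact_sales.withColumn("running_spend", F.sum("amount").over(running_w))
```

The same logic can be expressed in plain SQL with ROW_NUMBER() and SUM() OVER (...), which is the windowing proficiency the posting asks for.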
Posted 1 week ago
2.0 - 6.0 years
12 - 24 Lacs
Jaipur
Work from Office
Responsibilities: * Develop data pipelines using PySpark and SQL. * Collaborate with cross-functional teams on ML projects. * Optimize database performance through data modeling and visualization.
Posted 1 week ago
3.0 - 5.0 years
4 - 9 Lacs
Chennai
Work from Office
Are you skilled in PySpark, SQL, and AWS with 3-5 years of experience? Blackstraw is conducting a recruitment drive in Chennai for immediate joiners (notice period up to 15 days).
How to Apply: Please send your resume to nivya.varghese@blackstraw.ai. Shortlisted candidates will undergo an online coding test. Successful candidates will proceed to a face-to-face interview. Stay tuned for the drive date after coding-test clearance. Join us at Blackstraw for a rewarding career opportunity.
Posted 1 week ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Ready to Build Data That Actually Matters? At Exillar Infotech Pvt. Ltd., we don’t just move data — we move decisions. We’re looking for a Data Engineer who’s equal parts tech wizard and problem solver. If you’re fluent in Python, SQL, and Azure, and dream of scalable pipelines — let’s talk!
⸻
What You’ll Be Doing (aka Your Superpowers):
• Build and maintain end-to-end ETL pipelines using ADF & Python
• Transform data using PySpark notebooks in Azure Databricks
• Design cloud-native architecture with Synapse, Delta Lake, Azure SQL
• Optimize queries and procedures, and automate deployments via Azure DevOps
• Collaborate across teams and make data cleaner, faster, smarter
• Ensure security, performance, and compliance of data systems
⸻
What We’re Looking For:
• 1+ years of experience as a Data Engineer
• Proficiency in Azure Data Factory, Synapse, Databricks, SQL & Python
• Experience with Delta Lake, Snowflake, PostgreSQL
• Git, CI/CD, DevOps — we love engineers who automate everything
• Strong logic, problem-solving chops & a good sense of data humor
⸻
Why You’ll Love Working With Us
Be Part of Something Bigger: Join a forward-thinking, automation-driven team that leads with innovation.
Grow with the Flow: Level up in a data-first space that fuels learning and creativity.
Real Work, Real Impact: Build powerful systems that drive decisions across industries.
Supportive, Not Corporate: Flat structure, friendly team, and zero micromanagement.
Flex Your Flexibility: Flexible hours to match your rhythm.
Posted 1 week ago
6.0 - 11.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Hiring Data Engineer in Bangalore with 6+ years of experience in the skills below:
Must Have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes
Required Candidate Profile:
- Strong communication skills
- Experience with relational SQL/NoSQL databases - Postgres & Cassandra
- Experience with the ELK stack
- Ability to join immediately is a plus
- Must be ready to work from office
Posted 1 week ago
14.0 - 19.0 years
7 - 12 Lacs
Noida
Work from Office
We are looking for a Senior Manager - MLOps to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural work environment while working on the latest web technologies with an emphasis on user-centered design.
About You (Skills & Experience Required)
Bachelor's or master's degree in computer science, engineering, or a related field.
Overall 14+ years of experience across DevOps, machine learning operations and data engineering domains.
Proven experience in managing and leading technical teams.
Strong understanding of MLOps practices, tools, and frameworks.
Proficiency in data pipelines, data cleaning, and feature engineering is essential for preparing data for model training.
Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines.
Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows.
DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, are helpful for automating ML workflows.
Familiarity with cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models.
Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently.
It would be great if you also had:
Experience with big data technologies (Hadoop, Spark).
Knowledge of data governance and security practices.
Familiarity with DevOps practices and tools.
What will you be doing in this role?
Data Science Model Deployment & Monitoring: Oversee the deployment of machine learning models into production environments. Ensure continuous monitoring and performance tuning of deployed models. Implement robust CI/CD pipelines for model updates and rollbacks. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Communicate project status, risks, and opportunities to stakeholders. Provide technical guidance and support to team members.
Infrastructure & Automation: Design and manage scalable infrastructure for model training and deployment. Automate repetitive tasks to improve efficiency and reduce errors. Ensure the infrastructure meets security and compliance standards.
Innovation & Improvement: Stay updated with the latest trends and technologies in MLOps. Identify opportunities for process improvements and implement them. Drive innovation within the team to enhance MLOps capabilities.
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
Work across a range of analytical solutions covering implementation, testing, validation, documentation, monitoring, and reporting.
Extract large datasets from various data sources using tools such as Python, PySpark, Hadoop, SQL, etc., to perform multiple analyses.
Create data pipelines, perform model scoring, and validate model results.
Utilize custom-built automated frameworks for all deliverables.
Work on MLOps frameworks to deploy, monitor and validate ML models in real time.
Work with multiple teams such as Data Supply, ML Engineers, and Technology to implement analytical solutions and models.
Key Skills:
2 to 4 years of relevant experience in Data Analytics / Data Science roles.
Strong programming skills in tools such as PySpark, Python, and SQL to manipulate data and create deployable production code.
Knowledge of Python packages and tools.
Sound knowledge of, and exposure to, the application of statistical / machine learning techniques.
Experience in building and optimizing data pipelines, architectures and data sets.
Ability to interpret and translate data into meaningful business insights.
Excellent verbal and written communication and presentation skills.
Posted 1 week ago
4.0 - 7.0 years
8 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: We are looking for immediate joiners who can join within 30 days.
Posted 1 week ago
4.0 - 8.0 years
20 - 35 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 20 to 35 LPA
Exp: 3 to 7 years
Location: Gurgaon/Pune/Bengaluru
Notice: Immediate to 30 days
Job Profile:
Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units.
As a Data Engineer, the candidate will work on assignments for one of our utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations.
Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
Use scalable clusters for handling large datasets and complex computations in Databricks, optimizing performance and cost management.
Must have client engagement experience and collaboration with cross-functional teams.
Data engineering background in Databricks.
Capable of working effectively as an individual contributor or in collaborative team environments.
Effective communication and thought leadership with a proven record.
Candidate Profile:
Bachelor's/master's degree in economics, mathematics, computer science/engineering, operations research or related analytics areas.
3+ years of experience must be in data engineering.
Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure.
Prior experience in managing and delivering end-to-end projects.
Outstanding written and verbal communication skills.
Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges.
Able to understand cross-cultural differences and work with clients across the globe.
Posted 1 week ago
4.0 - 9.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role - Data Analyst (PySpark/SQL)
Location - Bengaluru
Type - Hybrid
Position - Full Time
We are looking for a Data Analyst with strong expertise in PySpark and SQL.
Roles and Responsibilities:
Develop expertise in SQL queries for complex data analysis, and troubleshoot issues related to data extraction, manipulation, transformation, mining, processing, wrangling, reporting, modeling and classification.
Desired Candidate Profile:
4-9 years of experience in Data Analytics with a strong background in PySpark programming.
Posted 1 week ago
6.0 years
0 Lacs
Kolkata metropolitan area, West Bengal, India
On-site
Job Title: Senior Data Engineer – Databricks | Azure | PySpark Location: Kolkata | Bengaluru | Hyderabad Experience: 6+ Years Job Type: Full-Time Industry: Insurance Background is a Plus Job Summary: We are seeking a highly skilled and experienced Senior Associate with a strong background in Databricks, SQL, PySpark , and Microsoft Azure . The ideal candidate will have Insurance domain knowledge and be responsible for building and optimizing data pipelines and architectures, transforming raw data into usable formats, and collaborating with data scientists, analysts, and other stakeholders to support data-driven decision-making. Key Responsibilities: Design, build, and maintain scalable data pipelines using Apache Spark and PySpark in Databricks . Develop and manage data integration and ETL workflows in Azure Data Factory and Databricks. Optimize and troubleshoot Spark jobs and ensure efficient use of compute and memory resources. Write complex SQL queries to extract, transform, and analyze large datasets across multiple data sources. Implement data governance, quality, and security practices as per organizational standards. Collaborate with cross-functional teams to define data requirements and implement scalable solutions. Monitor, maintain, and optimize performance of data platforms hosted on Azure (e.g., ADLS, Azure Synapse, Azure SQL). Provide technical leadership and mentoring to junior team members. Required Skills & Qualifications: 6+ years of hands-on experience in Data Engineering . Strong expertise in Databricks including notebooks, clusters, Delta Lake, and job orchestration. Proficient in PySpark and distributed computing using Apache Spark. Expert-level knowledge of SQL for data analysis, transformation, and optimization. Extensive experience with Azure Cloud Services including Azure Data Factory, Azure Data Lake Storage (ADLS) , Azure Synapse , and Azure SQL DB . Experience with CI/CD pipelines , version control (e.g., Git), and DevOps best practices in Azure environment. Solid understanding of data modeling , data warehousing , and data governance practices. Strong analytical and problem-solving skills. Strong communication skills and end client facing experience Preferred Qualifications: Databricks Certified Developer or Microsoft Azure certification(s) (e.g., DP-203). Experience with real-time data streaming (e.g., Kafka, Azure Event Hubs) is a plus. Familiarity with scripting languages such as Python for automation. Experience working in Agile/Scrum environments. Insurance domain knowledge is a plus.
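As a hedged illustration of the Spark-job optimisation and Delta Lake work described above, the sketch below shows a few common tuning patterns (broadcast join, selective caching, controlled output file counts). The dataset paths, names, and sizes are assumptions, and a Databricks runtime with Delta Lake support is assumed.

```python
# Illustrative Spark tuning patterns on Delta tables; paths and table sizes are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_enrichment").getOrCreate()

claims = spark.read.format("delta").load("/mnt/datalake/silver/claims")      # large fact table
policies = spark.read.format("delta").load("/mnt/datalake/silver/policies")  # small dimension

# Broadcast the small dimension to avoid a shuffle-heavy sort-merge join.
enriched = claims.join(F.broadcast(policies), on="policy_id", how="left")

# Cache only when the result is reused by several downstream actions.
enriched.cache()
enriched.count()

# Coalesce before writing to keep output file counts sensible for downstream readers.
(enriched.coalesce(32)
         .write.format("delta")
         .mode("overwrite")
         .save("/mnt/datalake/gold/claims_enriched"))
```

Which of these patterns actually helps depends on data volumes and cluster sizing, which is why the posting emphasises monitoring compute and memory usage when troubleshooting jobs.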
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Our client is a global IT services and consulting organization.
Data Software Engineer
Location - Pune
Notice period: Immediate to 60 days
F2F interview on 27th July, Sunday, in Pune
Exp: 5-12 years
Skill: Python, Spark, Azure Databricks/GCP/AWS
Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)
Job Description:
5-12 years of experience in Big Data and data-related technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of, and experience in, Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience with building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Experience with messaging systems such as Kafka or RabbitMQ
Good understanding of Big Data querying tools such as Hive and Impala
Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP and files
Good understanding of SQL queries, joins, stored procedures and relational schemas
Experience with NoSQL databases such as HBase, Cassandra and MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with native cloud data services on AWS or Azure Databricks
Ability to lead a team efficiently
Experience with designing and implementing Big Data solutions
Practitioner of Agile methodology
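To illustrate the Kafka/Spark-Streaming pattern listed above, here is a minimal Spark Structured Streaming sketch. The broker address, topic, landing path, and schema are illustrative assumptions.

```python
# Minimal Spark Structured Streaming sketch: Kafka topic to Parquet landing zone.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType().add("event_id", StringType()).add("value", DoubleType())

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
       .option("subscribe", "events")                       # hypothetical topic
       .load())

# Kafka delivers bytes; cast and parse the JSON payload into columns.
parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (parsed.writeStream.format("parquet")
         .option("path", "/data/streams/events")
         .option("checkpointLocation", "/data/checkpoints/events")  # enables exactly-once recovery
         .outputMode("append")
         .start())
query.awaitTermination()
```

The checkpoint location is what lets the stream restart safely after failures, which is a typical interview topic for the stream-processing experience this role asks for.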
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
It’s more than a career at NAB. It’s about more meaningful work, more global opportunities and more innovation beyond boundaries . Your job is just one part of your life. When you bring your ideas, energy, and hunger for growth, you’ll be recognised and rewarded for your contribution in return. You’ll have our support to excel for our customers, deliver positive change for our communities and grow your career. NAB has established NAB Innovation Centre India as a centre for operations and technology excellence to support NAB deliver faster, better, and more personalized experience to customers and colleagues. At NAB India, we’re ramping-up and growing at a very fast pace. Our passionate leaders recruit and develop high performing people, empowering them to deliver exceptional outcomes to make a positive difference in the lives of our customers and our communities. Please apply only if you are available for Face to Face interview on 2-August in Gurugram. What will you bring: 4-12 years technical experience Technical Domain experience (Subject Matter Expertise in Technology or Tools) Solid experience, knowledge and skills in Data Engineering, BI/software development such as ELT/ETL, data extraction and manipulation in Data Lake/Data Warehouse/Lake House environment. Hands on programming experience in writing Python, SQL, Unix Shell scripts, Pyspark scripts, in a complex enterprise environment Experience in configuration management using Ansible/Jenkins/GIT Hands on cloud-based solution design, configuration and development experience with Azure and AWS Hands on experience of using AWS Services - S3,EC2, EMR, SNS, SQS, Lambda functions, Redshift Hands on experience Of building Data pipelines to ingest, transform on Databricks Delta Lake platform from a range of data sources - Data bases, Flat files, Streaming etc.. Knowledge of Data Modelling techniques and practices used for a Data Warehouse/Data Mart application. Experience with Source Control Tools Github or BitBucket Exposure to relational Databases - Oracle or MS SQL or DB2 (SQL/PLSQL, Database design, Normalisation, Execution plan analysis, Index creation and maintenance, Stored Procedures) , PostGres/MySQL Skilled in querying data from a range of data sources that store structured and unstructured data Typical Experience At least 4 years experience in software development. Tertiary qualifications in computer science, IT or electrical engineering (computing science major). A diverse and inclusive workplace works better for everyone: Our goal is to foster a culture that fills us with pride, rooted in trust and respect. NAB is committed to creating a positive and supportive environment where everyone is encouraged to embrace their true, authentic selves. A diverse and inclusive workplace where our differences are celebrated, and our contributions are valued. It’s a huge part of what makes NAB such a special place to be. More focus on you: We’re committed to delivering a positive experience for our colleagues and a workplace you can be proud of. We support our colleagues to balance their careers and personal life through flexible working arrangements such as hybrid working and job sharing and competitive financial and lifestyle benefits. 
We invest in our colleagues through world class development programs (Distinctive Leadership and Career Qualified in Banking), and empower you to learn, grow and pursue exciting career opportunities Join NAB India: This is your chance to join NAB India and along with your experience and expertise to help shape an innovation driven organisation that focuses on making a positive impact in the lives of its customers, colleagues and communities To know more about us please click here To know more about NAB Global Innovation Centres please click here We’re on LinkedIn: NAB Innovation Centre India
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role : Data Engineer Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills : PySpark Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data operations. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with cross-functional teams to understand data requirements and deliver effective solutions. - Monitor and optimize data pipelines for performance and reliability. Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark. - Strong understanding of data modeling and database design principles. - Experience with ETL tools and processes. - Familiarity with cloud platforms such as AWS or Azure. - Knowledge of data warehousing concepts and best practices. Additional Information: - The candidate should have minimum 3 years of experience in PySpark. - This position is based at our Pune office. - A 15 years full time education is required.
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad, India
Employment Type: Full-time
Experience: 4 to 7 Years
About NationsBenefits: At NationsBenefits, we are leading the transformation of the insurance industry by developing innovative benefits management solutions. We focus on modernizing complex back-office systems to create scalable, secure, and high-performing platforms that streamline operations for our clients. As part of our strategic growth, we are focused on platform modernization — transitioning legacy systems to modern, cloud-native architectures that support the scalability, reliability, and high performance of core back-office functions in the insurance domain.
Position Overview: We are seeking a self-driven Data Engineer with 4-7 years of experience to build and optimize scalable ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. The role involves working across scrum teams to develop data solutions, ensure data governance with Unity Catalog, and support real-time and batch processing. Strong problem-solving skills, T-SQL expertise, and hands-on experience with Azure cloud tools are essential. Healthcare domain knowledge is a plus.
Job Description:
Work with different scrum teams to develop all the quality database programming requirements of the sprint.
Experience with the Azure cloud platform, including advanced Python programming, Databricks, Azure SQL, Data Factory (ADF), Data Lake, Data Storage and SSIS.
Create and deploy scalable ETL/ELT pipelines with Azure Databricks by utilizing PySpark and SQL.
Create Delta Lake tables with ACID transactions and schema evolution to support real-time and batch processing.
Experience with Unity Catalog for centralized data governance, access control, and data lineage tracking.
Independently analyse, solve, and correct issues in real time, providing end-to-end problem resolution.
Develop unit tests so that solutions can be tested automatically.
Use SOLID development principles to maintain data integrity and cohesiveness.
Interact with the product owner and business representatives to determine and satisfy needs.
Sense of ownership and pride in your performance and its impact on the company's success.
Critical thinking and problem-solving skills.
Team player. Good time-management skills. Great interpersonal and communication skills.
Mandatory Qualifications:
4-7 years of experience as a Data Engineer.
Self-driven with minimal supervision.
Proven experience with T-SQL programming, Azure Databricks, Spark (PySpark/Scala), Delta Lake, Unity Catalog and ADLS Gen2; exposure to Microsoft TFS, Visual Studio and DevOps.
Experience with cloud platforms such as Azure.
Analytical, problem-solving mindset.
Preferred Qualifications: Healthcare domain knowledge.
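As a hedged sketch of the Delta Lake upsert-with-schema-evolution work described above, the snippet below merges a daily feed into a Unity Catalog table. The catalog, schema, table, and path names are placeholders, and `spark` is assumed to be a Databricks notebook session with Delta Lake available.

```python
# Hedged Delta Lake MERGE sketch with automatic schema evolution; all names are hypothetical.
from delta.tables import DeltaTable

# Allow new columns arriving in the source feed to be merged into the target schema.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.read.parquet("/mnt/landing/members/2024-06-01/")   # hypothetical daily feed

# Three-part Unity Catalog name: catalog.schema.table (placeholder).
target = DeltaTable.forName(spark, "benefits.silver.members")

(target.alias("t")
 .merge(updates.alias("s"), "t.member_id = s.member_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Because Delta transactions are ACID, a failed merge leaves the target table unchanged, which is what makes this pattern safe for both batch and near-real-time loads.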
Posted 1 week ago
3.0 - 8.0 years
12 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities
We are hiring experienced Data Engineers for immediate joining at our Bangalore office. If you have strong hands-on experience in PySpark and Big Data ecosystems, we'd like to talk to you.
What we're looking for:
Minimum 3 years of experience in data engineering
Strong expertise in PySpark
Hands-on experience with Hadoop and Big Data technologies
Experience with the Azure cloud platform
Understanding of Gen AI concepts (preferred, not mandatory)
Ability to work in fast-paced environments and deliver quickly
Why join us:
Immediate joining opportunity
Work on enterprise-scale data projects
Exposure to the latest cloud and AI technologies
Posted 1 week ago
6.0 - 10.0 years
9 - 14 Lacs
Pune
Work from Office
Role & responsibilities Design, build, and maintain scalable data pipelines on AWS using services like Glue, Lambda, EMR, S3, Redshift, and Athena Develop and optimize ETL/ELT processes for data ingestion, transformation, and loading from diverse sources Collaborate with data analysts, scientists, and other stakeholders to understand data needs and deliver solutions Implement data quality checks and monitoring solutions to ensure integrity and reliability Work with structured and unstructured data from cloud and on-premise sources Develop and manage infrastructure as code using tools like Terraform or CloudFormation Maintain data documentation and metadata cataloging using tools like AWS Glue Data Catalog Ensure best practices in security, scalability, and performance in all data solutions Troubleshoot performance and data integrity issues Preferred candidate profile
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description Decision Point develops analytics and big data solutions for CPG, retail, and consumer-focused industries, working with global Fortune 500 clients. We provide analytical insights and solutions that help develop sales and marketing strategies by leveraging diverse data sources such as Point of Sale data, syndicated category data, and primary shipments. The company was founded by Ravi Shankar and his classmates from IIT Madras, who have extensive experience in the CPG and marketing analytics domains. At Decision Point, you will collaborate with data scientists, business consultants, and tech-savvy engineers passionate about extracting value from data for our clients. Role Description This is a full-time on-site role for a Lead Data Engineer (PySpark + Databricks) located in the Greater Bengaluru Area. The Lead Data Engineer will be responsible for designing, developing, and deploying data processing systems. Day-to-day tasks will include creating data models, performing ETL processes, managing data warehousing solutions, and conducting data analytics. The role involves working closely with other engineers, data scientists, and business consultants to deliver high-quality data-driven solutions. Qualifications Proficiency in Data Engineering, Data Modeling, and Data Analytics Experience with Extract Transform Load (ETL) processes and Data Warehousing Ability to work with PySpark and Databricks Strong problem-solving skills and analytical thinking Excellent communication and teamwork abilities Experience in CPG or retail industries is a plus Bachelor's or Master's degree in Computer Science, Engineering, or related field
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join our digital revolution in NatWest Digital X In everything we do, we work to one aim. To make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Java And Pyspark Developer This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll be engineering and maintaining innovative, customer centric, high performance, secure and robust solutions It’s a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders We're offering this role at associate level What you'll do In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be: Producing complex and critical software rapidly and of high quality which adds value to the business Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning Collaborating to optimise our software engineering capability Designing, producing, testing and implementing our working software solutions Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations The skills you'll need You'll need at least six years of experience in PySpark, SQL, Snowflake and Big Data. You'll also need experience in JIRA, Confluence and REST API Call. Experience working with AWS in Financial domain is desired. You’ll also need: Experience of working with development and testing tools, bug tracking tools and wikis Experience in multiple programming languages or low code toolsets Experience of DevOps and Agile methodology and associated toolsets Experience in developing Unit Test Cases and executing them Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
Posted 1 week ago
18.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Location: Chennai
Job Type: Full-time | On-site
Are you a seasoned Data Engineering leader ready to drive innovation with cutting-edge cloud and data solutions? Copperpod AI is looking for a Data Director to lead modern data warehouse and pipeline initiatives using Azure and open-source big data tools.
Responsibilities:
Lead the design and development of modern data warehouse solutions using the Azure stack.
Architect scalable, secure, and reliable data pipelines using Azure Data Factory, Databricks, and Synapse.
Collaborate with business and BI teams to translate reporting needs into data models.
Guide and mentor data engineering teams; conduct code reviews and troubleshoot complex issues.
Drive architecture discussions with client stakeholders and ensure best practices in cloud data engineering.
Manage orchestration via Airflow and contribute to automation in DevOps environments.
Must Haves:
12–18 years of total IT experience with 6+ years in data engineering and warehousing.
Deep knowledge of ETL, data pipelines, and dimensional modelling (star and snowflake schemas).
Proven experience with Python, PySpark, SQL, Azure Data Factory, Synapse, Databricks, and Airflow.
Exposure to streaming tools like Kafka and Kinesis, and NoSQL databases (MongoDB, Cassandra, Neo4j).
Familiarity with Terraform, Git, and CI/CD tools like CircleCI.
Experience working in Agile and DevOps environments.
Brownie points if you have the following:
Databricks certification.
Experience with Delta Lake and structured & unstructured data (imaging, geospatial).
Strong communication and problem-solving skills.
Enthusiasm to lead from the front and shape the future of data architecture.
Ready to take the next step? Join a forward-thinking team at Copperpod AI that thrives on solving complex challenges with innovative data solutions. Be the architect of transformation. Send your resumes to: hiring@copperpoddigital.com
About us: Copperpod AI is a global AI partner to enterprises, headquartered in New Jersey, USA. We specialize in solving complex business problems through tailored, outcome-focused AI solutions built to scale across decision intelligence, operations, and customer experience. Our work is anchored on three core pillars: 1. Innovation, through rapid prototyping and experimentation in our AI Lab; 2. Execution, via agile pods of top-tier talent trained in enterprise delivery; and 3. Impact, through measurable outcomes aligned to strategic business goals. Through our AI Foundry, we co-develop with clients to move from idea to real-world deployment, delivering business outcomes across fashion, retail, manufacturing, financial services, healthcare and others. At Copperpod AI, we don't just deploy technology; we shape transformation that lasts.
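To illustrate the Airflow orchestration mentioned in the responsibilities, a minimal DAG sketch follows. The task bodies, schedule, and names are illustrative assumptions, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal Airflow DAG sketch; tasks, names, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from source systems")  # placeholder for the real extraction step

def transform():
    print("trigger the Databricks / PySpark transformation job")  # placeholder

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # extract must finish before transform starts
```

In a real deployment, the placeholder callables would be replaced by operators that call Azure Data Factory or Databricks jobs, with the DAG itself deployed through the team's CI/CD process.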
Posted 2 weeks ago