2.0 - 5.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Company Overview
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad, Telangana

Role Overview:
Accordion is looking for a Senior Data Engineer with Database/Data Warehouse/Business Intelligence experience. He/she will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack, and must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust, scalable solutions that address clients' current and future requirements at optimal cost. The Senior Data Engineer should be able to understand various architectures and recommend the right fit for the use case of the project. A successful Senior Data Engineer will possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments, as well as strong organizational, critical-thinking, and communication skills.

What You Will Do:
Understand the business requirements thoroughly to design and develop the BI architecture.
Determine business intelligence and data warehousing solutions that meet business needs.
Perform data warehouse design and modelling according to established standards.
Work closely with the business teams to arrive at methodologies to develop KPIs and metrics.
Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline.
Develop standard reports and functional dashboards based on business requirements.
Develop and deliver high-quality reports in a timely and accurate manner.
Conduct training programs and knowledge-transfer sessions for junior developers when needed.
Recommend improvements to provide optimal reporting solutions.

Ideally, you have:
An undergraduate degree (B.E./B.Tech.); tier-1/tier-2 colleges are preferred.
2-5 years of experience in a related field.
Proven expertise in SSIS, SSAS, and SSRS (MSBI suite).
In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
Good understanding of Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) and AWS (Glue, Aurora Database, DynamoDB, Redshift, QuickSight).
Proven ability to take initiative and be innovative.
An analytical mind with a problem-solving attitude.

Why Explore a Career at Accordion:
High-growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
Fun culture and peer group: A non-bureaucratic, fun working environment and a strong peer group that will challenge you and accelerate your learning curve.
Other benefits for full-time employees:
Health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps, discounted health services (including vision and dental) for employees and family members, and free doctor consultations and counsellors.
Corporate meal card options for ease of use and tax benefits.
Team lunches and company-sponsored team outings and celebrations.
Cab reimbursement for women employees beyond a certain time of the day.
A robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
A reward and recognition platform to celebrate professional and personal milestones.
A positive and transparent work environment, including various employee engagement and benefit initiatives to support personal and professional learning and development.
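As a brief illustration of the reporting work this posting describes, here is a minimal Python sketch that computes a monthly sales KPI from a SQL Server warehouse. The server, database, credentials, and fact-table schema are hypothetical placeholders, not any client's actual environment.

```python
# Hypothetical sketch: aggregate a warehouse fact table into a monthly
# KPI feed for a BI dashboard. Connection details and the fact_sales
# schema are illustrative placeholders.
import pyodbc
import pandas as pd

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=warehouse.example.com;DATABASE=SalesDW;"
    "UID=report_user;PWD=<password>"
)

kpi_query = """
SELECT DATEFROMPARTS(YEAR(order_date), MONTH(order_date), 1) AS month_start,
       SUM(net_amount)              AS total_sales,
       COUNT(DISTINCT customer_id)  AS active_customers
FROM dbo.fact_sales
GROUP BY DATEFROMPARTS(YEAR(order_date), MONTH(order_date), 1)
ORDER BY month_start;
"""

# Pull the aggregate into pandas for validation or export to a report.
kpi_df = pd.read_sql(kpi_query, conn)
print(kpi_df.tail())
```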
Posted 1 week ago
5.0 - 9.0 years
20 - 30 Lacs
Pune
Hybrid
Role & responsibilities Perform AI model training activities such as generating/loading large datasets, document samples, process documentation, and prompts to support rapid and complete development of high impact models. Execute daily monitoring of AI and process performance. Identify, troubleshoot, and resolve issues with AI-based process performance in collaboration with users and various stakeholders Identify and drive implementation of improvements in process, AI prompts, and model accuracy and completeness in conjunction with Ecolab Digital AI team. Support objectives to ensure AI performance meets business value objectives. Ensure compliance with established responsible AI policies. Maintain documentation on AI processes. Monitoring the performance of business processes, such as cash applications, order processes, and billing. Identifying issues and resolving them in collaboration with development teams. Providing training data sets for AI models. Interacting with users to understand and solve performance issues in automated processes. Analyzing data and metrics to detect anomalies and optimize processes. Preferred candidate profile Minimum Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field. Masters degree preferred Process domain expertise Experience with AI/ML operations and monitoring tools. Strong problem-solving and analytical skills. Knowledge of AI governance and ethical guidelines. Excellent communication and collaboration skills. Knowledge of machine learning frameworks and libraries Preferred Qualifications: Deep understanding of business processes and how they operate. Technical aptitude to understand systems and processes, including AI solutions. Experience with SAP and ServiceNow is ideal. Competence in data analysis and the ability to interact with large volumes of data. Focus on business analysis and problem-solving rather than software development. Ability to interact with users and provide support in optimizing processes.
Posted 1 week ago
5.0 - 9.0 years
14 - 17 Lacs
Pune
Work from Office
Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable, efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office.

Qualifications:
B.E./B.Tech in Computer Science, IT, or a related discipline
MCS/MCA or equivalent preferred

Key Responsibilities:
Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions
Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery
Define and implement best practices for data modeling, integration, governance, and security
Collaborate with engineering and analytics teams to ensure data solutions meet business needs
Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control
Troubleshoot data issues and ensure system performance, reliability, and scalability
Guide and mentor junior data engineers and developers
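As an illustration of the orchestration stack this role names (Airflow, DBT, Snowflake, GitHub), here is a minimal, hypothetical Airflow DAG that runs and then tests a dbt project on a daily schedule. The DAG id and project path are placeholders; Snowflake credentials are assumed to live in dbt's profiles.yml rather than in the DAG.

```python
# A minimal, hypothetical Airflow DAG: orchestrate a dbt run against
# Snowflake, then run dbt tests. Paths and ids are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build the models first; dbt reads Snowflake credentials from its
    # profiles.yml, keeping secrets out of the DAG definition itself.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    # Only test after a successful build, so failures point at fresh data.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test
```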
Posted 1 week ago
7.0 - 10.0 years
9 - 12 Lacs
Mumbai
Work from Office
Experience: 7-10 years
Educational qualification: Bachelor's degree or higher in a related field

Summary
Applies the principles of software engineering to design, develop, maintain, test, and evaluate computer software that provides business capabilities, solutions, and/or product suites. Provides systems life-cycle management (e.g., analysis, technical requirements, design, coding, testing, implementation of systems and applications software) to ensure delivery of technical solutions on time and within budget. Researches and supports the integration of emerging technologies. Provides knowledge and support for applications development, integration, and maintenance. Develops program logic for new applications or analyzes and modifies logic in existing applications. Analyzes requirements, tests, and integrates application components, and ensures that system improvements are successfully implemented. May focus specifically on web/internet applications, using a variety of languages and platforms. Defines application complexity drivers, estimates development effort, creates milestones and/or timelines, and tracks progress towards completion.

Application Development/Programming
Identifies areas for improvement and develops innovative enhancements using available software development tools, following the customer's design requirements.

System and Technology Integration
Interprets internal/external business challenges and recommends integration of the appropriate systems, applications, and technology to provide a fully functional solution to a business problem.

Development and support of the activities outlined below. (Note: other items may arise that are not directly referenced in this scope, including technology updates, technology expansion, DevOps pipeline changes, information security, and technical-debt compliance.)

New Development
Development of new features/functionality driven by PI (Program Increment). This includes documenting Features and Stories; obtaining approvals from Business and UPS IT Product Owners; story analysis; design of the required solution; review with UPS SMEs; coding; testing; non-functional requirements (reporting, production capacity, performance, security); and migration/deployment.

Scope at a high level
The scope of this project includes, but is not limited to, the following activities:
Develop new integration pipelines with SC360: Databricks, Azure Functions, Azure Data Factory, Azure DevOps, Cosmos DB, Oracle, Azure SQL, SSIS packages.
Work in alignment with business teams to support development effort for all SC360 data-related PI items.
Develop fixes for defects and issues identified in the production environment.
Build POCs as needed to supplement the SC360 platform.
Develop and implement architectural changes as needed in the SC360 platform to increase efficiency, reduce cost, and monitor the platform.
Provide production support assistance as needed.
NFRs include, but are not limited to, the ability to build according to UPS coding standards, including security compliance.

Required Skills
General skills:
Strong communication skills (both oral and written); will need to work closely with UPS IT, Business Product Owners, and potentially directly with UPS customers.
Agile life-cycle management.
Vulnerability/threat analysis.
Testing.
Deployments across environments and segregation of duties.
Technical skills:
Experience with Azure Databricks, SQL, and ETL SSIS packages (mandatory; very critical).
Azure Data Factory, Function Apps, DevOps (a must).
Experience with Azure and other cloud technologies.
Database: Oracle, SQL Server, and Cosmos DB experience needed.
Azure services (Key Vault, App Configuration, Blob Storage, Redis Cache, Service Bus, Event Grid, ADLS, Application Insights, etc.).
Knowledge of STRIIM (nice to have).
Microservices experience (preferred).
Experience with Angular and .NET Core (not critical).
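As a hedged illustration of one integration touchpoint listed above, this sketch reads records from a Cosmos DB container with the azure-cosmos Python SDK. The account URL, database, container, and query are placeholders and imply nothing about the actual SC360 schema.

```python
# Hypothetical sketch: read recent records from a Cosmos DB container
# as input to a downstream ADF/Databricks pipeline step.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="<account-key>",  # in practice, sourced from Key Vault
)
container = (
    client.get_database_client("sc360")
          .get_container_client("shipments")
)

# Cross-partition query for records changed since a cutoff date.
items = container.query_items(
    query="SELECT c.id, c.status FROM c WHERE c.updated >= '2024-06-01'",
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], item["status"])
```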
Posted 1 week ago
2.0 - 6.0 years
10 - 15 Lacs
Hyderabad
Work from Office
JD: Senior Data Engineer
Job Location: Hyderabad
We are looking for an experienced Data Engineer with 3+ years of expertise in Databricks, AWS, Scala, and Apache Spark to work with one of our leading Japanese automotive clients in North America, where cutting-edge technology meets innovation. The ideal candidate will have a strong foundation in big data processing, cloud technologies, and performance tuning to enable efficient data management. You'll also collaborate with global teams, work with advanced cloud technologies, and contribute to a forward-thinking data ecosystem that powers the future of automotive engineering.

Who Can Apply:
Only candidates who can join immediately or within 1 week can apply.
Ideal for those seeking technical growth and work on a global project with cutting-edge technologies.
Best suited for professionals passionate about innovation and problem-solving.

Key Responsibilities:
Architect, design, and implement scalable ETL pipelines for large-scale data processing.
Develop and optimize data solutions using Databricks, AWS, Scala, and Spark.
Ensure high-performance data processing with distributed computing techniques.
Implement best practices for data modeling, transformation, and governance.
Work closely with cross-functional teams to improve data reliability and efficiency.
Monitor and troubleshoot data pipelines for performance improvements.

Required Skills & Qualifications:
Excellent communication and the ability to handle direct client interactions.
2+ years of experience in Data Engineering.
Expertise in Databricks, AWS, Scala, and Apache Spark.
Strong knowledge of big data architecture, ETL processes, and cloud data solutions.
Ability to write optimized and scalable Spark jobs for distributed computing.
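Below is a minimal sketch of the scalable ETL work this posting describes, written in PySpark (the role itself emphasizes Scala; Python is used here for consistency with the other examples on this page). The S3 paths and telemetry columns are hypothetical.

```python
# Hypothetical PySpark ETL step: read raw vehicle telemetry, clean it,
# aggregate per vehicle per day, and write partitioned Parquet output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("telemetry_etl").getOrCreate()

# Read raw JSON telemetry and standardize the event date.
raw = spark.read.json("s3://example-bucket/raw/telemetry/")
clean = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("vehicle_id").isNotNull())
)

# Aggregate to a daily grain suitable for downstream analytics.
daily = clean.groupBy("vehicle_id", "event_date").agg(
    F.count("*").alias("events"),
    F.avg("speed_kmh").alias("avg_speed_kmh"),
)

# Partitioning by date keeps downstream reads pruned and fast.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/telemetry_daily/"
)
```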
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation.
Contribute to reusable components/assets/accelerators to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
Participate in customer PoCs to deliver the outcomes.
Participate in delivery reviews/product reviews and quality assurance, and act as a design authority.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms; experience architecting and implementing data platforms on the Azure Cloud Platform.
Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala/PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience:
Experience architecting complex data platforms on the Azure Cloud Platform and on-prem.
Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Ability to take full ownership and deliver a component or functionality. Support the team to deliver project features with high quality and provide technical guidance. Responsible for working effectively, individually and with team members, toward customer satisfaction and success.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: SQL, ADF, Azure Databricks
Preferred technical and professional experience: PostgreSQL, MSSQL, Eureka, Hystrix, Zuul/API gateway, in-memory storage
Posted 1 week ago
12.0 - 17.0 years
6 - 10 Lacs
Bengaluru
Work from Office
The role of Sr. Analytics Consultant / Lead exists within the Analytics offshore and onshore teams to develop innovative analytics solutions using Data Visualization / BI solutions, Gen AI/ML, Data Analytics, and Data Science that will generate tangible value for business stakeholders and customers. The role focuses on using sophisticated data analysis and modelling techniques, alongside cognitive computing and artificial intelligence, with business acumen to discover insights that will guide business decisions and uncover business opportunities.

Key Accountabilities
Manage large Advanced Analytics teams, owning client delivery and team-growth accountabilities with a consulting mindset.
Consulting, transformation, and building proposals and competency. Experience within the insurance services industry is essential.
Confidence in leading large-scale data projects and working in product teams.
Highly experienced in solving business problems with data-led tech solutions; manage diverse cross-functional teams with a strong commercial mindset.
Interpret big data to identify and analyse emerging trends and produce appropriate business insights that monitor portfolio performance and continuously drive an improvement in business results.
Develop advanced business-performance and data analytics tools to assist the Senior Specialists, Portfolio Managers, business stakeholders (including but not limited to Portfolio, Customer & Marketing functions), and wider Commercial team members in making data-based recommendations, then implementing and monitoring them accurately.
Develop statistical models to predict business performance and customer behaviour. Research customer behaviours and attitudes, leading to in-depth knowledge and understanding of differences in customer-level profitability.
Promote innovation by improving current processes and developing new analytical methods and factors. Identify, investigate, and introduce additional rating factors with the objective of improving product risk and location selection to maximise profit.
Provide consulting advice and solutions to solve business clients' hardest pain points and realise the biggest business opportunities through advanced use of data and algorithms. Can work on projects across functions based on the needs of the business.
Actively add value by lifting the business capability for process automation.
Build, enhance, and maintain quality relationships with all internal and external customers. Adopt a customer focus in the delivery of internal/external services. Build positive stakeholder relationships to foster a productive environment and open communication channels.
Bring new Data Science thinking to the group by staying on top of the latest developments, industry meet-ups, etc.
Expert-level knowledge of Gen AI/ML, Python, BI and visualization, transformation, and business consulting, including building technical proposals.
Expert-level knowledge of statistical concepts.
Typically, this role would have 12+ years of relevant experience (in Data Science or advanced analytical consulting). At least 10 years of leadership experience is necessary.

Qualifications
Superior results in a Bachelor's degree in a highly technical subject area (statistics, actuarial science, engineering, maths, programming, etc.).
A postgraduate degree in a relevant statistical or data science-related area (or equivalent, demonstrable online study).

Key Capabilities/Technical Competencies (skills, knowledge, technical or specialist capabilities)
Mandatory:
Proven ability to engage a team to achieve individual, team, and divisional goals.
Consulting, transformation, and building proposals and competency.
Lead and manage large-scale teams from people, project-management, and client-management perspectives.
Solid programming experience in Tableau, R, SQL, and Python (AI/ML).
Experience with AWS or another cloud service.
Familiarity with data lake platforms (e.g., Snowflake and Databricks).
Demonstrated understanding of advanced machine learning algorithms, including some exposure to NLP and image processing.
Expert-level understanding of statistical concepts.
Planning and organization: advanced level.
Delegation, project management, delivery, and productionizing analytical services: advanced level.
A high degree of specialist expertise within one or more data science capabilities, e.g., unstructured data analysis, cognitive/AI solutions (including use of API platforms such as IBM Watson, MS Azure, AWS, etc.), and Hadoop/Spark-based ecosystems.
Familiarity with Gen AI concepts.
Highly Valued:
Good understanding of insurance products, the industry, the market environment, customer segments, and key business drivers.
Strong knowledge of finance and budgeting/forecasting of key business drivers, with the ability to interpret and analyse reports relevant to the area of responsibility.
Additional:
Creativity and Innovation: a curious mind that does not accept the status quo; design-thinking experience is highly valued.
Communication Skills: superior communication skills to co-design solutions with customers; emotional intelligence and the ability to communicate complex ideas to a range of internal stakeholders; consulting skills highly valued.
Business Focus: advanced analytical skills practiced with a significant business focus to ensure practical solutions that deliver tangible value.
Strategic Focus: critically evaluate both company and key business customers' strategies while keeping abreast of best-practice advanced analytics strategies.
Change Management: the ability to recognise, understand, and support the need for change and its anticipated impact on both the team and self; adaptable and responsive to a continuously changing environment.
Customer Service: proven commitment to delivering a quality, differentiated experience.
Time Management: prioritisation of work without supervision.
Project Management: the ability to plan, organize, implement, monitor, and control projects, ensuring efficient utilisation of technical and business resources to achieve project objectives.
Partnering: the ability to deliver solutions utilising both onshore and offshore resources.
Posted 1 week ago
3.0 - 8.0 years
18 - 27 Lacs
Bengaluru
Hybrid
You Will:
Focus on ML model load testing and the creation of end-to-end test cases.
Evaluate models' scalability and latency by running suites of metrics under different RPS, and create and automate test cases for individual models, ensuring a smooth rollout of the models.
Enhance monitoring of model scalability and handle incidents of increased error rates.
Collaborate with machine learning engineers, backend engineers, and QA test engineers from cross-functional teams.

You Bring:
Advanced degree (Master's or Ph.D.) in Computer Science/Statistics/Data Science, specializing in machine learning.
3+ years of industry experience (please do not include years in a research group or R&D team).
Strong programming skills in languages such as Java, Python, and Scala.
Hands-on experience in Databricks, MLflow, and Seldon.
Excellent problem-solving and analytical skills.
Expertise in recommendation algorithms.
Experience with software engineering principles and the use of cloud services such as AWS.

Preferred Qualifications:
Experience in Kubeflow, Tecton, and Jenkins.
Experience in building and monitoring large-scale online customer-facing ML applications, preferably recommendation systems.
Experience working with custom ML platforms, feature stores, and ML model monitoring.
Familiarity with best practices in machine learning and software engineering.
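As a rough illustration of the load-testing responsibility above, here is a hypothetical Python sketch that drives a model-serving endpoint at an approximate target RPS and reports latency percentiles. The endpoint URL and payload are placeholders standing in for a real Seldon or REST deployment.

```python
# Hypothetical load-test sketch: fire requests at a target RPS for a
# fixed duration, then report p50/p95/p99 latency.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://models.example.internal/predict"  # placeholder URL

def one_call(_):
    """Send one scoring request and return its wall-clock latency."""
    t0 = time.perf_counter()
    requests.post(ENDPOINT, json={"features": [0.1, 0.2, 0.3]}, timeout=5)
    return time.perf_counter() - t0

def run_load(rps: int, seconds: int) -> None:
    latencies = []
    with ThreadPoolExecutor(max_workers=rps) as pool:
        for _ in range(seconds):
            tick = time.perf_counter()
            latencies += list(pool.map(one_call, range(rps)))
            # Sleep off the remainder of the second to approximate the RPS.
            time.sleep(max(0.0, 1.0 - (time.perf_counter() - tick)))
    q = statistics.quantiles(latencies, n=100)
    print(f"p50={q[49]:.3f}s  p95={q[94]:.3f}s  p99={q[98]:.3f}s")

run_load(rps=20, seconds=10)
```

In practice a dedicated tool (Locust, k6, Gatling) would hold the request rate more precisely; the sketch only shows the shape of the measurement.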
Posted 1 week ago
1.0 - 2.0 years
11 - 15 Lacs
Hyderabad
Work from Office
About the role
We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 12 years of experience in software & data engineering and analytics and a proven track record of designing and implementing complex data solutions. You will be expected to design, create, deploy, and manage Blackbaud's data architecture. This role has considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and assists with the technical direction, specifically with data, of other projects.

What you'll be doing
Develop and direct the strategy for all aspects of Blackbaud's data and analytics platforms, products, and services.
Set, communicate, and facilitate technical direction for the AI Center of Excellence, and collaboratively beyond it.
Design and develop breakthrough products, services, or technological advancements in the data intelligence space that expand our business.
Work alongside product management to craft technical solutions that solve customer business problems.
Own the technical data governance practices and ensure data sovereignty, privacy, security, and regulatory compliance.
Continuously challenge the status quo of how things have been done in the past.
Build a data access strategy to securely democratize data and enable research, modelling, machine learning, and artificial intelligence work.
Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice.
Work in a cross-functional team to translate business needs into data architecture solutions.
Ensure data solutions are built for performance, scalability, and reliability.
Mentor junior data architects and team members.
Keep current on technology: distributed computing, big data concepts, and architecture.
Promote internally how data within Blackbaud can help change the world.

What we want you to have:
10+ years of experience in data and advanced analytics.
At least 8 years of experience working on data technologies in Azure/AWS.
Experience building modern products and infrastructure.
Experience working with .NET/Java and microservice architecture.
Expertise in SQL and Python.
Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
Expertise in Databricks and Microsoft Fabric.
Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products.
Experience with machine learning.
Excellent communication and leadership skills.
Ability to work flexible hours as required by business priorities.
Ability to deliver software that meets consistent standards of quality, security, and operability.

Stay up to date on everything Blackbaud: follow us on LinkedIn, X, Instagram, Facebook, and YouTube.
Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 1 week ago
10.0 - 15.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Novo Nordisk Global Business Services (GBS) India
Department: Global Data & Artificial Intelligence

Are you passionate about building scalable data pipelines and optimising data workflows? Do you want to work at the forefront of data engineering, collaborating with cross-functional teams to drive innovation? If so, we are looking for a talented Data Engineer to join our Global Data & AI team at Novo Nordisk. Read on and apply today for a life-changing career!

The Position
As a Senior Data Engineer, you will play a key role in designing, developing, and maintaining data pipelines and integration solutions to support analytics, artificial intelligence workflows, and business intelligence. This includes:
Design, implement, and maintain scalable data pipelines and integration solutions aligned with the overall data architecture and strategy.
Implement data transformation workflows using modern ETL/ELT approaches, while establishing best practices for data engineering, including testing methodologies and documentation.
Optimize data workflows by harmonizing and securely transferring data across systems, while collaborating with stakeholders to deliver high-performance solutions for analytics and artificial intelligence.
Monitor and maintain data systems to ensure their reliability.
Support data governance by ensuring data quality and consistency, while contributing to architectural decisions shaping the data platform's future.
Mentor junior engineers and foster a culture of engineering excellence.

Qualifications
Bachelor's or Master's degree in computer science, software development, or engineering.
Over 10 years of overall professional experience, including more than 4 years of specialized expertise in data engineering.
Experience developing production-grade data pipelines using Python, Databricks, and the Azure cloud, with a strong foundation in software engineering principles.
Experience in the clinical data domain, with knowledge of standards such as CDISC SDTM and ADaM (good to have).
Experience working in a regulated industry (good to have).

About the department
You will be part of the Global Data & AI team. Our department is globally distributed, and its mission is to harness the power of data and artificial intelligence, integrating it seamlessly into the fabric of Novo Nordisk's operations. We serve as the vital link between the realms of data and artificial intelligence throughout the whole organization, empowering Novo Nordisk to realize its strategic ambitions through our pivotal initiatives. The atmosphere is fast-paced and dynamic, with a strong focus on collaboration and innovation. We work closely with various business domains to create actionable insights and drive commercial excellence.
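As a small illustration of the data-quality responsibility above, here is a hypothetical Python gate a pipeline might run before publishing a batch. The rules and column names are illustrative assumptions, not Novo Nordisk standards.

```python
# Hypothetical data-quality gate: run simple checks on a batch before
# it is published downstream; an empty failure list means pass.
import pandas as pd

def quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; empty list means the batch passes."""
    failures = []
    if df["subject_id"].isna().any():
        failures.append("null subject_id values present")
    if df["subject_id"].duplicated().any():
        failures.append("duplicate subject_id values present")
    # ISO date strings compare correctly as text for a plausibility check.
    if not df["visit_date"].between("2000-01-01", "2100-01-01").all():
        failures.append("visit_date outside plausible range")
    return failures

batch = pd.DataFrame({
    "subject_id": ["S001", "S002", None],
    "visit_date": ["2024-02-01", "2024-02-03", "2024-02-05"],
})
problems = quality_checks(batch)
print(problems or "all checks passed")  # here: null subject_id is flagged
```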
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad, Gurugram, Ahmedabad
Work from Office
About the Role:
Grade Level (for internal use): 10
The Role: Senior Software Developer, Application Development

The Team
We are looking for a highly motivated, enthusiastic, and skilled software engineer for S&P Global Market Intelligence. This developer will help us accelerate different initiatives within Energy Datasets.

The Impact
As a Senior Software Developer, you will be part of the development team that manages and supports the internal applications supporting Energy Datasets.

What's in it for you:
It's a fast-paced agile environment that deals with huge volumes of data, so you'll have an opportunity to sharpen your data skills and work on an emerging technology stack.

Responsibilities
Troubleshoot problems and analyze and isolate issues.
Work on projects and tasks independently or with little assistance.
Build solutions to develop/support key business needs.
Engineer components and common services based on standard development models, languages, and tools.
Produce system design documents.
Collaborate effectively with technical and non-technical partners.
Quickly learn new and internal technologies and help junior teammates.

What We're Looking For
Bachelor's/Master's degree in computer science, information systems, or equivalent.
Experience working in Agile Scrum methodology.
A minimum of 4 years of experience in application development using Microsoft and Big Data technologies.
Solid expertise in AWS and Databricks; you will work on and lead multiple projects built around these technologies. A minimum of three years of hands-on experience in these technologies is a MUST.
Solid command of big data and cloud-based technologies, including Snowflake, OData, Python, Scala, and Postgres.
We also have applications written in .NET C#; you will be asked to work on these as per business requirements. An ideal candidate will possess strong knowledge of object-oriented design, .NET, .NET Core, C#, SQL Server, and design patterns including MVVM.
Good knowledge of, and experience working on, multi-tier applications.
Experience with microservices architecture will be a huge plus.
Experience building applications using WinForms, WPF, ADO.NET, and Entity Framework will be a huge plus.
Experience working on Windows services and scheduler applications using .NET and C#.
SQL Server platform and stored-procedure programming experience using Transact-SQL.
Experience with troubleshooting applications, debugging, logging, and performance monitoring.
Must be a team player and quick learner, willing to take on difficult tasks as and when required.
Experience with other technologies, including Azure Cloud, Google Cloud, and Docker, is a plus but not mandatory.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 week ago
8.0 - 13.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About the Role:
Grade Level (for internal use): 11
S&P Global Market Intelligence
The Role: Lead Software Engineer

The Team
The Market Intelligence Industry Data Solutions business line provides data technology and services supporting acquisition, ingestion, content management, mastering, and distribution to power our Financial Institution Group business and customer needs. We focus on platform scale to support the business by following a common data lifecycle that accelerates business value. We provide essential intelligence for the Financial Services, Real Estate, and Insurance industries.

The Impact
The FIG Data Engineering team will be responsible for implementing and maintaining services and/or tools to support existing feed systems, which allows users to consume FIG datasets, and for making FIG data available to a data fabric for wider consumption and processing within the company.

What's in it for you
The ability to work with global stakeholders and on the latest tools and technologies.

Responsibilities
Build new data acquisition and transformation pipelines using big data and cloud technologies.
Work with the broader technology team, including information architecture and data fabric teams, to align pipelines with the Lodestone initiative.

What We're Looking For:
Bachelor's in computer science or equivalent, with at least 8+ years of professional software work experience.
Experience with Big Data platforms such as Apache Spark, Apache Airflow, Google Cloud Platform, and Apache Hadoop.
Deep understanding of REST, good API design, and OOP principles.
Experience with object-oriented/object-functional scripting languages: Python, C#, Scala, etc.
Good working knowledge of relational SQL and NoSQL databases.
Experience in maintaining and developing software in production utilizing cloud-based tooling (AWS, Docker & Kubernetes, Okta).
Strong collaboration and teamwork skills with excellent written and verbal communication skills.
Self-starter, motivated, with the ability to work in a fast-paced software development environment.
Agile experience highly desirable.
Experience in Snowflake and Databricks will be a big plus.

Return to Work
Have you taken time out for caring responsibilities and are now looking to return to work? As part of our Return-to-Work initiative, we are encouraging enthusiastic and talented returners to apply and will actively support your return to the workplace.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets.
Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
About the Role:
Grade Level (for internal use): 09
We are seeking a skilled and motivated Application Operations Engineer for an SRE role with a Java, React JS, and Spring Boot skill set, along with expertise in Databricks, particularly with Oracle integration, to join our dynamic SRE team. The ideal candidate should have 3 to 6 years of experience supporting robust web applications using Java, React JS, and Spring Boot, with a strong background in managing and optimizing data workflows leveraging Oracle databases. The incumbent will be responsible for supporting applications; troubleshooting issues; providing RCAs and suggested fixes; managing continuous integration and deployment pipelines; automating processes; and ensuring system reliability, maintainability, and stability.

Responsibilities
Work in CI/CD, handle infrastructure issues, and apply know-how in supporting operations; maintain user-facing features using React JS, Spring Boot, and Java.
Support reusable components and front-end libraries for future use.
Partner with development teams to improve services through rigorous testing and release procedures.
Show willingness to learn new tools and technologies as the project demands.
Ensure the technical feasibility of UI/UX designs.
Optimize applications for maximum speed and scalability.
Collaborate with other team members and stakeholders.
Work closely with data engineers to ensure smooth data flow and integration.
Create and maintain documentation for data processes and workflows.
Troubleshoot and resolve issues related to data integrity and performance.
Good to have: working knowledge of the Tomcat app server and Apache web server, Oracle, and Postgres; command of Linux and Unix.

Qualifications (self-driven individual):
Bachelor's degree in computer science, engineering, or a related field.
3-6 years of professional experience.
Proficiency in advanced Java and JavaScript, including DOM manipulation and the JavaScript object model.
Experience with popular React JS workflows (such as Redux, MobX, Flux).
Familiarity with RESTful APIs.
Experience with cloud platforms such as AWS and Azure.
Knowledge of CI/CD pipelines and DevOps practices.
Experience with data engineering tools and technologies, particularly Databricks.
Proficiency in Oracle database technologies and SQL queries.
Excellent problem-solving skills and attention to detail.
Ability to work independently and as part of a team.
Good verbal and written communication skills.
Familiarity with ITSM processes such as Incident, Problem, and Change Management using ServiceNow (preferable).
Ability to work in shifts.

Grade: 09
Location: Hyderabad
Hybrid mode: twice a week work from office
Shift time: 6:30 am to 1 pm OR 2 pm to 10 pm IST

About S&P Global Ratings
At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets.
With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings.

What's In It For You
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

S&P Global has a Securities Disclosure and Trading Policy (the "Policy") that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some divisions, pursuant to the Policy's requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Posted 1 week ago
4.0 - 7.0 years
14 - 18 Lacs
Mumbai
Work from Office
About NCR Atleos
Job Title: Data Analytics Manager, Internal Audit
Job Location: Mumbai, India
Reports to: Executive Director, Internal Audit

Position Summary
At NCR Atleos, our Internal Audit Department's (IAD) purpose is to help enable competent and informed decisions to add value and improve operations, while contributing meaningfully to Board and organizational confidence. We are indispensable business partners, with a brand focused on insight, impact, and excellence. We believe that everything we do is to enhance value, provide insights, and instill confidence. To do this, we must be relevant, connected, flexible, and courageous. NCR Atleos IAD is seeking a Data Analytics Manager who will play a critical role in enhancing the Internal Audit function through data-driven insights, analytics, and process optimization. This role reports directly to the Executive Director, Internal Audit.

Key Areas of Responsibility:
Data Analytics Strategy & Execution: Develop and implement data analytics methodologies to support the internal audit function; design and execute advanced data analysis scripts and models to identify trends, anomalies, and potential risks; partner with the audit teams to integrate analytics into audit planning, execution, and reporting.
Audit Support: Collaborate with the Director of Internal Audit to support audits in the areas of technology, information security, business processes, and financial operations; extract, analyze, and interpret data from various enterprise systems to support audit objectives; provide insights that enhance audit outcomes and help identify areas for operational improvement.
Data Visualization & Reporting: Create clear, actionable, and visually compelling reports and dashboards to communicate audit findings to stakeholders and the Audit Committee; develop templates and standards for data analytics in audit work products to ensure consistency and clarity.
Collaboration & Training: Work closely with IT, Finance, Operations, and other business units to gather data and validate insights; mentor and train other Internal Audit team members on leveraging data analytics tools and techniques; build partnerships across the organization to foster a culture of data-driven decision-making.
Technology & Tools: Identify, evaluate, and implement data analytics tools and technologies to improve audit processes; stay updated on emerging technologies and trends in data analytics and audit methodologies; support automation initiatives to enhance efficiency within the Internal Audit department.
Compliance & Risk Management: Ensure data analytics initiatives align with organizational compliance requirements and internal audit standards; monitor and evaluate data integrity, system reliability, and process controls across business units.
Continuous Improvement: Stay abreast of emerging technologies, audit methodologies, and regulatory changes; support the Executive Director in overseeing the use of technology within the audit function, including data analytics and audit management software, to enhance audit quality and efficiency; contribute innovation and improvements to the IT audit process, controls, and the overall Internal Audit Department.

Qualifications:
Education: Bachelor's or Master's in Computer Science, IT, Engineering, Data Science, Econometrics, or related fields.
Experience: Proven data analytics experience in internal audit or risk management, with strong analytical, problem-solving, and project management skills.
Statistical Methods: Proficient in regressions, time series, clustering, and decision trees.
Programming: Skilled in JavaScript, Python, R, PHP, .NET, and SQL.
Databases: Expertise in relational databases, data warehouses, ETL, UI tools, and query optimization.
Visualization: Proficient in Tableau and Power BI, with advanced MS Office skills.
Cloud Platforms: Experience with Microsoft Azure, Databricks, Hadoop, or similar platforms.
Project Management: Experience managing analytics projects and stakeholders.
Communication: Ability to convey complex data insights to non-technical stakeholders.
Leadership: Demonstrated leadership and team-mentoring skills.
Cultural Sensitivity: Ability to work effectively in a global environment.
Languages: Proficiency in multiple languages is an advantage.
Ethics: High ethical standards and commitment to audit integrity.
Confidentiality: Ensuring the security of sensitive data.
Team Environment: A positive attitude within a dynamic team setting.

EEO Statement
NCR Atleos is an equal-opportunity employer. It is NCR Atleos' policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law.

Statement to Third-Party Agencies
To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.
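To illustrate the kind of anomaly detection this role describes, here is a hedged Python sketch that scores synthetic journal entries with scikit-learn's Isolation Forest and surfaces outliers for auditor review. The features and contamination rate are illustrative assumptions, not an audit standard.

```python
# Hypothetical audit-analytics sketch: flag unusual journal entries
# (large amounts posted at odd hours) with an Isolation Forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic population: typical entries plus two deliberately odd ones.
entries = pd.DataFrame({
    "amount": np.concatenate([rng.normal(500, 50, 500), [25_000, 31_000]]),
    "hour_posted": np.concatenate([rng.integers(9, 18, 500), [2, 3]]),
})

# contamination is the assumed share of anomalies; tune per population.
model = IsolationForest(contamination=0.01, random_state=0)
entries["flag"] = model.fit_predict(entries[["amount", "hour_posted"]])

# -1 marks observations the model isolates quickly, i.e. likely outliers.
print(entries[entries["flag"] == -1])
```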
Posted 1 week ago
8.0 - 13.0 years
20 - 22 Lacs
Pune
Hybrid
Job Responsibilities:
Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations.
Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery.
Create data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python, and other Databricks tools.
Oversee and lead the module through planning, estimation, implementation, monitoring, and tracking.

Desired Skills and Experience:
8+ years of experience in data engineering, with expertise in Azure Databricks, MSSQL, LakeFlow, Python, and supporting Azure technologies.
Design, build, test, and maintain highly scalable data management systems using Azure Databricks.
Create data processing pipelines utilizing Databricks Notebooks and Spark SQL.
Integrate Azure Databricks with other Azure services such as Azure Data Lake Storage and Azure SQL Data Warehouse.
Design and implement robust ETL pipelines using ADF and Databricks, ensuring data quality and integrity.
Collaborate with data architects to implement effective data models and schemas within the Databricks environment.
Develop and optimize PySpark/Python code for data processing tasks.
Assist stakeholders with data-related technical issues and support their data infrastructure needs.
Develop and maintain documentation for data pipeline architecture, development processes, and data governance.
Data Warehousing: In-depth knowledge of data warehousing concepts, architecture, and implementation, including experience with various data warehouse platforms.
Extremely strong organizational and analytical skills with strong attention to detail.
Strong track record of excellent results delivered to internal and external clients.
Excellent problem-solving skills, with the ability to work independently or as part of a team.
Strong communication and interpersonal skills, with the ability to effectively engage both technical and non-technical stakeholders.
Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts.
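As an illustration of the pipeline work this role describes, here is a minimal Databricks-style sketch: ingest raw files, transform with Spark SQL, and persist a Delta table. The storage path, schema, and table names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw JSON from the lake (the path is an assumption for illustration)
raw = spark.read.format("json").load("abfss://raw@mystorageacct.dfs.core.windows.net/orders/")
raw.createOrReplaceTempView("orders_raw")

# Transform with Spark SQL: deduplicate, then derive a daily revenue aggregate
daily = spark.sql("""
    SELECT order_date,
           SUM(amount) AS revenue,
           COUNT(DISTINCT order_id) AS order_count
    FROM (SELECT DISTINCT order_id, order_date, amount FROM orders_raw)
    GROUP BY order_date
""")

# Persist as a Delta table for downstream consumers
# (assumes an existing "analytics" database)
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")
```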
Posted 1 week ago
13.0 - 18.0 years
44 - 48 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
About KPI Partners
KPI Partners is a leading provider of data-driven insights and innovative analytics solutions. We strive to empower organizations to harness the full potential of their data, driving informed decision-making and business success. We are seeking an enthusiastic and experienced professional to join our dynamic team as an Associate Director / Director in Data Engineering & Modeling. We are looking for a highly skilled and motivated Associate Director / Director, Data Engineering & Solution Architecture, to support the strategic delivery of modern data platforms and enterprise analytics solutions. This is a hands-on leadership role that bridges technology and business, helping design, develop, and operationalize scalable cloud-based data ecosystems. You will work closely with client stakeholders, internal delivery teams, and practice leadership to drive architecture, implementation, and best practices across key initiatives.

Key Responsibilities
Solution Design & Architecture: Collaborate on designing robust, secure, and cost-efficient data architectures using cloud-native platforms such as Databricks, Snowflake, Azure Data Services, AWS, and Incorta.
Data Engineering Leadership: Oversee the development of scalable ETL/ELT pipelines using ADF, Airflow, dbt, PySpark, and SQL, with an emphasis on automation, error handling, and auditing.
Data Modeling & Integration: Design data models (star, snowflake, canonical), resolve dimensional hierarchies, and implement efficient join strategies.
API-based Data Sourcing: Work with REST APIs for data acquisition, managing pagination, throttling, authentication, and schema evolution (see the sketch after this posting).
Platform Delivery: Support the end-to-end project lifecycle, from requirement analysis and PoCs to development, deployment, and handover.
CI/CD & DevOps Enablement: Implement and manage CI/CD workflows using Git, Azure DevOps, and related tools to enforce quality and streamline deployments.
Mentoring & Team Leadership: Mentor senior engineers and developers, conduct code reviews, and promote best practices across engagements.
Client Engagement: Interact with clients to understand needs, propose solutions, resolve delivery issues, and maintain high satisfaction levels.

Required Skills & Qualifications
14+ years of experience in Data Engineering, BI, or Solution Architecture roles.
Strong hands-on expertise in at least one cloud platform among Azure, Databricks, Snowflake, and AWS (EMR).
Proficiency in Python, SQL, and PySpark for large-scale data transformation.
Proven skills in developing dynamic and reusable data pipelines (metadata-driven preferred).
Strong grasp of data modeling principles and modern warehouse design.
Experience with API integrations, including error handling and schema versioning.
Ability to design modular and scalable solutions aligned with business goals.
Solid communication and stakeholder management skills.

Preferred Qualifications
Exposure to data governance, data quality frameworks, and security best practices.
Certifications in Azure Data Engineering, Databricks, or Snowflake are a plus.
Experience working with Incorta and building materialized views or delta-based architectures.
Experience working with enterprise ERP systems.
Experience leading data ingestion from Oracle Fusion ERP and other enterprise systems.
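Since API-based data sourcing with pagination and throttling is called out above, here is a hedged sketch of that pattern in Python. The endpoint shape, token, page parameters, and response format are all assumptions for illustration.

```python
import time
import requests

def fetch_all(base_url: str, token: str, page_size: int = 500) -> list[dict]:
    """Pull every page from a paginated REST endpoint, backing off on HTTP 429."""
    headers = {"Authorization": f"Bearer {token}"}   # bearer auth assumed
    records, page = [], 1
    while True:
        resp = requests.get(base_url, headers=headers,
                            params={"page": page, "per_page": page_size},
                            timeout=30)
        if resp.status_code == 429:                  # throttled: honor Retry-After
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()
        batch = resp.json().get("data", [])          # response envelope assumed
        if not batch:                                # an empty page signals the end
            break
        records.extend(batch)
        page += 1
    return records
```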
What We Offer
Opportunity to work on cutting-edge data transformation projects for global enterprises.
Mentorship from senior leaders and a clear path to Director-level roles.
A flexible work environment and a culture that values innovation, ownership, and growth.
Competitive compensation and professional development support.
Posted 1 week ago
4.0 - 7.0 years
7 - 14 Lacs
Pune, Mumbai (All Areas)
Work from Office
Job Profile Description
Create and maintain highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory, Databricks, and Apache Spark/Scala.
Responsible for managing a growing cloud-based data ecosystem and the reliability of our corporate data lake and analytics data mart.
Contribute to the continued evolution of the Corporate Analytics Platform and integrated data model.
Be part of the Data Engineering team in all phases of work, including analysis, design, and architecture, to develop and implement cutting-edge solutions.
Negotiate and influence changes outside of the team that continuously shape and improve the data strategy.
4+ years of experience implementing analytics data solutions leveraging Azure Data Factory, Databricks, Logic Apps, ML Studio, Data Lake, and Synapse.
Working experience with Scala, Python, or R.
Bachelor's degree or equivalent experience in Computer Science, Information Systems, or related disciplines.
Posted 1 week ago
7.0 - 10.0 years
8 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Mandatory Skills: AWS, Databricks

Detailed job description - Skill Set:
Looking for a highly experienced (10+ years) and deeply hands-on Data Architect to lead the design, build, and optimization of our data platforms on AWS and Databricks. This role requires a strong blend of architectural vision and direct implementation expertise, ensuring scalable, secure, and performant data solutions from concept to production.
Strong hands-on experience in data engineering/architecture, with architectural and implementation experience on AWS and Databricks, and schema modeling.
AWS: Deep hands-on expertise with key AWS data services and infrastructure.
Databricks: Expert-level hands-on development with Databricks (Spark SQL, PySpark), Delta Lake, and Unity Catalog.
Coding: Exceptional proficiency in Python, PySpark, Spark, AWS services, and SQL.
Architectural: Strong data modeling and architectural design skills with a focus on practical implementation.
Preferred: AWS/Databricks certifications, experience with streaming technologies, and other data tools.
Design & Build: Lead and personally execute the design, development, and deployment of complex data architectures and pipelines on AWS (S3, Glue, Lambda, Redshift, etc.) and Databricks (PySpark/Spark SQL, Delta Lake, Unity Catalog).
Databricks Expertise: Own the hands-on development, optimization, and performance tuning of Databricks jobs, clusters, and notebooks.
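To ground the AWS-plus-Databricks pattern this role centers on, here is a minimal sketch of an S3-to-Delta-Lake flow. The bucket, columns, and the Unity Catalog three-level table name are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_to_delta").getOrCreate()

# Read raw Parquet landed in S3 (bucket and prefix are placeholders)
events = spark.read.parquet("s3://acme-raw-zone/events/")

# Light conformance: typed timestamp plus a date column to partition on
conformed = (events
             .withColumn("event_ts", F.to_timestamp("event_ts"))
             .withColumn("event_date", F.to_date("event_ts")))

# Write partitioned Delta, registered under a Unity Catalog
# three-level name (catalog.schema.table is assumed to exist)
(conformed.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .saveAsTable("main.analytics.events"))
```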
Posted 1 week ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ years
JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
Hands-on software coding or scripting for a minimum of 3 years.
Experience in product management for at least 2 years.
Stakeholder management experience for at least 3 years.
Experience in at least one of the GCP, AWS, or Azure cloud platforms.

Key Responsibilities:
Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code); see the sketch after this posting.
Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
Strong SQL development skills for ETL, analytics, and performance optimization.
Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
Bonus: Exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
Ability to communicate technical designs and issues effectively with team members and stakeholders.
Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
Contributions to open-source data engineering/tools communities.
Implementing data cataloging, stewardship, and data democratization initiatives.
Hands-on work with DataOps/DevOps pipelines for code and data.
Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
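The Airflow responsibility above lends itself to a short sketch: a daily DAG chaining an ingest task into a transform task. The callables, schedule, and retry settings are assumptions, and the `schedule` argument presumes Airflow 2.4 or later.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder step: pull source extracts into the landing zone
    print("ingesting")

def transform():
    # Placeholder step: run Spark jobs / SQL models over landed data
    print("transforming")

with DAG(
    dag_id="daily_sales_pipeline",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                      # Airflow 2.4+ argument name
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task           # transform runs only after ingest succeeds
```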
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Mumbai
Work from Office
This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks, in support of data science practices for the Marketing and Finance business units. This position supports the integration of data from various data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. This position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and applications software. This position participates and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.

RESPONSIBILITIES
Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development; remains hands-on.
Develops and delivers data engineering documentation.
Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
Recommends analytic reporting products/tools and supports the adoption of emerging technology.
Performs data engineering maintenance and support.
Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
Performs ETL tool capabilities with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform.
Codes using programming languages used for statistical analysis and modeling, such as Python/Spark.

REQUIRED QUALIFICATIONS
Literate in the programming languages used for statistical modeling and analysis, data warehousing and cloud solutions, and building data pipelines.
Proficient in developing notebooks in Databricks using Python, Spark, and Spark SQL.
Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure is preferred.
Proficient in using Azure Data Factory and other Azure features such as Logic Apps.
Knowledge of Delta Lake, Lakehouse, and Unity Catalog concepts is preferred.
Strong understanding of cloud-based data lake systems and data warehousing solutions.
Has used Agile concepts for development, including Kanban and Scrum.
Strong understanding of the data interconnections between the organization's operational and business functions.
Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing the data securely, and providing data accessibility.
Strong understanding of the data environment to ensure that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
Strong knowledge of algorithms and data structures, as well as data filtering and data optimization.
Strong understanding of analytic reporting technologies and environments (e.g., Power BI, Looker, Qlik, etc.).
Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to stakeholders.
Bachelor's degree in MIS, mathematics, statistics, or computer science, an international equivalent, or equivalent job experience.
REQUIRED SKILLS
3 years of experience with Databricks, Apache Spark, Python, and SQL.

PREFERRED SKILLS
Delta Lake, Unity Catalog, R, Scala, Azure Logic Apps, a cloud services platform (e.g., GCP, Azure, or AWS), and Agile concepts.
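As a hedged illustration of the ETL cleansing and enrichment work described in this posting, here is a small PySpark sketch. The source and reference table names, columns, and the join key are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse_enrich").getOrCreate()

customers = spark.read.table("raw.customers")      # assumed source table
regions = spark.read.table("ref.region_lookup")    # assumed reference data

# Cleanse: drop duplicates, normalize email casing, reject rows missing the key
cleansed = (customers
            .dropDuplicates(["customer_id"])
            .withColumn("email", F.lower(F.trim("email")))
            .filter(F.col("customer_id").isNotNull()))

# Enrich with region attributes via a broadcast join on the small lookup
enriched = cleansed.join(F.broadcast(regions), on="region_code", how="left")
enriched.write.format("delta").mode("overwrite").saveAsTable("curated.customers")
```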
Posted 1 week ago
1.0 - 3.0 years
8 - 16 Lacs
Noida
Work from Office
Proficient in Java, Spark, and Kafka for real-time processing.
Skilled in HBase for NoSQL on on-prem clusters.
Strong in data modeling for scalable NoSQL systems.
Built ETL pipelines using Spark for transformation.
Knowledge of Hadoop cluster management.

Required Candidate Profile
Bachelor's in CS or a related field.
Familiar with version control systems, particularly Git.
Knowledge of AWS, Azure, or GCP.
Understanding of distributed databases, especially HBase.
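This posting asks for Java with Spark and Kafka; to stay consistent with the Python sketches elsewhere on this page, here is the same real-time pattern shown in PySpark Structured Streaming. The broker address, topic, and paths are assumptions, and the Kafka connector package must be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Subscribe to a Kafka topic (broker and topic names are placeholders)
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers bytes; cast the value payload to string for downstream parsing
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# Land the stream as files, with exactly-once bookkeeping via the checkpoint
query = (parsed.writeStream.format("parquet")
         .option("path", "/data/events/")
         .option("checkpointLocation", "/chk/events/")
         .start())
query.awaitTermination()
```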
Posted 1 week ago
2.0 - 6.0 years
6 - 14 Lacs
Hyderabad, Gurugram
Work from Office
We are looking for a skilled Data Engineer with strong expertise in Python, PySpark, SQL, and AWS to join our data engineering team. The ideal candidate will be responsible for building scalable data pipelines, transforming large datasets, and enabling data-driven decision-making across the organization.

Role & Responsibilities
Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data (see the sketch after this posting).
Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.

Preferred Candidate Profile
Strong proficiency in Python for scripting and data manipulation.
Hands-on experience with PySpark for distributed data processing.
Proficient in writing complex SQL queries for large-scale data extraction and transformation.
Solid understanding of and experience with the AWS cloud ecosystem (especially S3, Glue, EMR, Lambda).
Knowledge of data warehousing, data lakes, and ETL/ELT processes.
Familiarity with version control tools like Git and workflow orchestration tools (e.g., Airflow) is a plus.
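Because data quality checks and validation rules are a core responsibility here, a minimal sketch of rule-based validation before publishing follows. The paths, columns, and thresholds are illustrative assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
orders = spark.read.parquet("s3://acme-curated/orders/")   # placeholder path

# Compute simple rule metrics over the dataset
total = orders.count()
null_ids = orders.filter(F.col("order_id").isNull()).count()
negative_amounts = orders.filter(F.col("amount") < 0).count()

# Fail the pipeline run if a validation rule is violated
assert null_ids == 0, f"{null_ids} rows missing order_id"
assert negative_amounts / max(total, 1) < 0.001, "too many negative amounts"

# Only publish once the checks pass
orders.write.mode("overwrite").parquet("s3://acme-published/orders/")
```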
Posted 1 week ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We're Hiring | Platform Engineer @ Xebia
Locations: Bangalore | Bhopal | Chennai | Gurgaon | Hyderabad | Jaipur | Pune
Immediate Joiners (0-15 Days Notice Period Only). A valid passport is mandatory.

Xebia is on the lookout for passionate Platform Engineers with a strong mix of Azure Infrastructure as Code (IaC) Terraform and Data Engineering expertise to join our Cloud Data Platform team.

What You'll Do:
Design and deploy scalable Azure infrastructure using Terraform.
Build and optimize ETL/ELT pipelines using Azure Data Factory, Databricks, and Event Hubs.
Automate infrastructure provisioning and enforce security/governance via IaC.
Support CI/CD workflows with Git and Azure DevOps.
Work with VNETs, Key Vaults, Storage Accounts, and monitoring tools.
Use Python, SQL, and Spark for data transformation and processing.

What We're Looking For:
Hands-on experience in Azure IaC and Data Engineering.
Strong in scripting, automation, and monitoring.
Familiarity with real-time and batch processing.
Azure certifications (Data Engineer / DevOps) are a plus.
Must have a valid passport.

Interested? Send your CV along with the following details to: vijay.s@xebia.com

Required Details:
Full Name
Total Experience
Current CTC
Expected CTC
Current Location
Preferred Location
Notice Period / Last Working Day (if serving notice)
Primary Skill Set
LinkedIn Profile URL
Do you have a valid passport? (Yes/No)

Please apply only if you haven't applied recently or aren't already in the process with any open Xebia roles. Let's build the future of cloud-native data platforms together!
Posted 1 week ago
8.0 - 12.0 years
2 - 2 Lacs
Pune
Work from Office
Skills: Spark, Java, ADF, ADB
Role: Tech Lead
Experience level: 8-10 years

Project Description:
High proficiency and 8-10 years of experience in designing/developing data analytics and data warehouse solutions.
Writing Java- and Python-based scripts on Spark.
Experience in designing large data distribution, integration with service-oriented architecture and/or data warehouse solutions, and Data Lake solutions using Azure Databricks with large and multi-format data.
Ability to translate a working solution into an implementable working package using the Azure platform.
Good understanding of Azure Storage Gen2.
Hands-on experience with the Azure stack (minimum 5 years): Spark, Java, Azure Databricks, Azure Data Factory; requires experience in writing Spark code in Java.
Proficient coding experience using Spark (Scala/Python) and T-SQL.
Understanding of the services related to Azure Analytics, Azure SQL, Azure Function App, and Logic App.
Should be able to demonstrate constant and quick learning ability and handle pressure situations without compromising on quality.
Well organized and able to manage multiple projects in a fast-paced, demanding environment.
Attention to detail and quality; excellent problem-solving and communication skills.
Ability and willingness to learn new tools and applications.

Work Location: Infosys Phase III, Pune
Posted 1 week ago