8.0 - 13.0 years
6 - 9 Lacs
Hyderabad
Work from Office
Data Engineering Team
As a Lead Data Engineer for India, you will be accountable for leading the technical aspects of product engineering by being hands-on: working on the enhancement, maintenance and support of the product your team is working on, within your technology area. You will be responsible for your own hands-on coding, providing design thinking and design solutions, ensuring the quality of your team's output, representing your team in product-level technical forums and ensuring your team provides technical input to, and aligns with, the overall product roadmap.
How will you make an impact?
You will work with Engineers in other technology areas to define the overall technical direction for the product in alignment with the Group's technology roadmap, standards and frameworks; with product owners and business stakeholders to shape the product's delivery roadmap; and with support teams to ensure its smooth operation. You will be accountable for the overall technical quality of the work produced in India, ensuring it is in line with the expectations of stakeholders, clients and the Group. You will also be responsible for line management of your team of Engineers, ensuring that they perform to the expected levels and that their career development is fully supported.
Key responsibilities
o Produce Quality Code
 o Code follows team standards, is structured to ensure readability and maintainability and goes through review smoothly, even for complex changes
 o Designs respect best practices and are favourably reviewed by peers
 o Critical paths through code are covered by appropriate tests
 o High-level designs / architectures align to the wider technical strategy, presenting reusable APIs where possible and minimizing system dependencies
 o Data updates are monitored and complete within SLA
o Produce Quality Technical Design
 o Technical designs follow team and group standards and frameworks, are structured to ensure reusability, extensibility and maintainability, and go through review smoothly, even for complex changes
 o Designs respect best practices and are favourably reviewed by peers
 o High-level designs / architectures align to the wider technical strategy, presenting reusable APIs where possible and minimizing system dependencies
o Operate at a high level of productivity
 o Estimates are consistently challenging, but realistic
 o Most tasks are delivered within estimate
 o Complex or larger tasks are delivered autonomously
 o Sprint goals are consistently achieved
o Squad Collaboration
 o Demonstrate commitment to continuous improvement of squad activities
 o The product backlog is consistently well-groomed, with a responsible balance of new features and technical debt mitigation
 o Other Engineers in the Squad feel supported in their development
o People Management
 o Direct reports have meaningful objectives recorded in Quantium's Performance Portal, and understand how those objectives relate to business strategy
 o Direct reports' career aspirations are understood / documented, with action plans in place to move towards those goals
 o Direct reports have regular catch-ups to discuss performance, career development and their ongoing happiness / engagement in their role
 o Any performance issues are identified, documented and agreed, with realistic remedial plans in place
Key activities
Build technical product/application engineering capability in the team in line with the Group's technical roadmap, standards and frameworks
Write polished code, aligned to team standards, including appropriate unit / integration tests
Review code and test cases produced by others, to ensure changes satisfy the associated business requirement, follow best practices, and integrate with the existing code-base
Provide constructive feedback to other team members on the quality of code and test cases
Collaborate with other Lead / Senior Engineers to produce high-level designs for larger pieces of work
Validate technical designs and estimates produced by other team members
Merge reviewed code into release branches, resolving any conflicts that arise, and periodically deploy updates to production and non-production environments
Troubleshoot production problems and raise / prioritize bug tickets to resolve any issues
Proactively monitor system health and act to report / resolve any issues
Provide out-of-hours support for periodic ETL processes, ensuring SLAs are met
Work with business stakeholders and other leads to define and estimate new epics
Contribute to backlog refinement sessions, helping to break down each epic into a collection of smaller user stories that will deliver the overall feature
Work closely with Product Owners to ensure the product backlog is prioritized to maximize business value and manage technical debt
Lead work breakdown sessions to define the technical tasks required to implement each user story
Contribute to sprint planning sessions, ensuring the team takes a realistic but challenging amount of work into each sprint and each team member will be productively occupied
Contribute to the team's daily stand-up, highlighting any delays or impediments to progress and proposing mitigation for those issues
Contribute to sprint review and sprint retro sessions, to maintain a culture of continuous improvement within the team
Coach / mentor more junior Engineers to support their continuing development
Set and periodically review delivery and development objectives for direct reports
Identify each direct report's longer-term career objectives and, as far as possible, factor these into work assignments
Hold fortnightly catch-ups with direct reports to review progress against objectives, assess engagement and give them the opportunity to raise concerns about the product or team
Work through the annual performance review process for all team members
Conduct technical interviews as necessary to recruit new Engineers
The superpowers you'll be bringing to the team:
1. 8+ years of experience in designing, developing, and implementing end-to-end data solutions (storage, integration, processing, access) in Google Cloud Platform (GCP) or similar cloud platforms
2. Strong experience with SQL
3. Values delivering high-quality, peer-reviewed, well-tested code
4. Experience creating ETL/ELT pipelines that transform and process terabytes of structured and unstructured data in real time
5. Knowledge of DevOps functions and the ability to contribute to CI/CD pipelines
6. Strong knowledge of data warehousing and data modelling, including techniques such as dimensional modelling
7. Strong hands-on experience with BigQuery/Snowflake, Airflow/Argo, Dataflow, Data Catalog, Vertex AI, Pub/Sub, etc., or equivalent products in other cloud platforms
8. Solid grip on programming languages such as Python or Scala
9. Hands-on experience in manipulating Spark at scale with true in-depth knowledge of the Spark API
10. Experience working with stakeholders; mentoring experience for juniors in the team is good to have
11. Recognized as a go-to person for high-level designs and estimations
12. Experience working with source control tools (Git preferred), with a good understanding of branching / merging strategies
13. Experience with Kubernetes and Azure will be an advantage
14. Understanding of GNU/Linux systems and Bash scripting
15. Bachelor's degree in Computer Science, Information Technology or a related discipline
16. Comfortable working in a fast-moving, agile development environment
17. Excellent problem-solving / analytical skills
18. Good written / verbal communication skills
19. Commercially aware, with the ability to work with a diverse range of stakeholders
20. Enthusiasm for coaching and mentoring junior engineers
21. Experience in leading teams, including line management responsibilities
What could your Quantium Experience look like?
Working at Quantium will allow you to challenge your imagination. You will get to solve complex problems with rigor and precision and by asking great questions, but it also means you can think big, outside the box, and push your problem-solving skills to the max. By joining the Quantium team, you'll get to:
Forge your path: So many of our team have moved around different teams or offices. You'll be in the driver's seat, and we empower you to make your career your own.
Find your kind: Embrace diversity and connect with your tribe (think foodies, dog lovers, readers, or runners).
Make an impact: Leave your mark. Your contributions resonate, regardless of your role or rank.
On top of the Quantium Experience, you will enjoy a range of great benefits that go beyond the ordinary. Some of these include:
Flexible work arrangements: Achieve work-life balance at your own pace with hybrid and flexible work arrangements.
Continuous learning: Our vision is empowering analytics talent to thrive. The Analytics Community fosters the development of individuals, thought leadership and technical excellence at Quantium through building strong connections, fostering collaboration, and co-creation of best practice.
Remote working: Embrace the opportunity to work outside of your assigned home location for up to 2 months every year.
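As a flavour of the periodic, SLA-bound ETL work described above, here is a minimal Apache Airflow sketch of a scheduled pipeline with an SLA on its tasks. The DAG id, schedule and task callables are illustrative placeholders, not Quantium's actual pipelines.

```python
# Illustrative only: a daily ETL DAG with an SLA; dag_id, schedule and
# callables are placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_source(**context):
    # Placeholder: pull the day's partition from the upstream system.
    print("extracting partition", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: load the transformed partition into the warehouse.
    print("loading partition", context["ds"])


default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
    "sla": timedelta(hours=2),  # breaches surface in Airflow's SLA miss report
}

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run overnight, ahead of business hours
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load
```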
Posted 3 weeks ago
1.0 - 6.0 years
3 - 8 Lacs
Noida
Work from Office
About the Role: Grade Level (for internal use): 09
S&P Global – Dow Jones Indices
About the Role: Software Developer - Enterprise Data Management
The Team: We are seeking a highly skilled Enterprise Data Management (EDM) Software Engineer to join our dynamic team. This role will focus on building, enhancing, and optimizing our enterprise data management solutions, ensuring efficient data processing, governance, and integration across multiple platforms. The ideal candidate will have a strong background in data engineering, software development, and enterprise data architecture.
Responsibilities and Impact: Design, develop, and maintain robust EDM solutions to support business needs. Implement data ingestion, validation, and transformation pipelines for large-scale structured and unstructured data. Develop and optimize SQL databases for data storage, retrieval, and reporting. Ensure high data quality and compliance with regulatory and security requirements. Collaborate with analysts and business stakeholders to design scalable data solutions. Automate data workflows, monitoring, and alerting to improve system performance and resilience. Work on system integrations, including APIs, ETL/ELT processes, and cloud-based data services. Troubleshoot and resolve data-related technical issues, ensuring high availability and reliability. Stay up to date with industry trends and emerging technologies in data management and cloud computing.
What We’re Looking For:
Basic Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. 1 to 10 years of experience in software development with a focus on enterprise data management. Strong proficiency in SQL and Python for data processing and automation. Experience with relational and NoSQL databases. Hands-on experience with ETL/ELT tools and frameworks (e.g., EDM, Informatica). Familiarity with AWS and its data services. Strong understanding of data governance, metadata management, and data security best practices. Excellent problem-solving skills, an analytical mindset, and the ability to work in an agile environment. Effective communication skills to collaborate with cross-functional teams. We are a global team, and the candidate should be flexible in their work hours.
Additional Preferred Qualifications: Experience with data modeling, master data management (MDM), and data lineage tools. Knowledge of financial or market data processing and corporate actions is a plus. Experience working in a DevOps environment with CI/CD pipelines.
About S&P Global Dow Jones Indices: At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction. We’re the largest global resource for index-based concepts, data and research, and home to iconic financial market indicators, such as the S&P 500® and the Dow Jones Industrial Average®. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 7.4 trillion in passively managed assets linked to our indices and over USD 11.3 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI).
S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/spdji.
What’s In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We’re more than 35,000 strong worldwide—so we’re able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world’s leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent.
By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Posted 3 weeks ago
3.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 10
Experience & Skills: Minimum 5+ years of working experience in Technology (application development and production support). 5+ years of experience in development of pipelines that extract, transform, and load data into an information product that helps the organization reach its strategic goals. Minimum 5+ years of experience in developing and supporting ETLs using the Python and Spark platform. Experience with Python, Spark, and Hive, and an understanding of data-warehousing and data-modeling techniques. Knowledge of industry-wide visualization and analytics tools (e.g., Tableau, R). Strong data engineering skills with the AWS/Azure cloud platform. Experience with streaming frameworks such as Kafka. Knowledge of Linux, SQL, and any scripting language. Experience working with relational databases, preferably Oracle. Experience in continuous delivery through CI/CD pipelines, containers and orchestration technologies. Experience working in an Agile development environment. Experience working with cross-functional teams, with strong interpersonal and written communication skills. Candidate must have the desire and ability to quickly understand and work within new technologies.
About S&P Global Ratings: At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings
What’s In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We’re more than 35,000 strong worldwide—so we’re able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world’s leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy.
-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Posted 3 weeks ago
6.0 - 8.0 years
8 - 11 Lacs
Chennai, Bengaluru
Work from Office
Skill: Azure Data Factory
Notice Period: 30 Days
Skills Required: 6-8 years of professional experience in data engineering or a related field. Profound expertise in SQL, T-SQL, database design, and data warehousing principles. Strong experience with Microsoft Azure tools including SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake. Proficient in Python, PySpark, and PySQL for data processing and analytics tasks. Experience with Power BI and other reporting and analytics tools. Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimizations in database and query processing. Excellent problem-solving, analytical, and communication skills.
Posted 3 weeks ago
0.0 - 5.0 years
10 - 20 Lacs
Mumbai
Work from Office
We are recruiting an expert application support engineer to scale up the global support capability for our data and analytics platform used by our research and trading teams. The candidate will work closely with our data engineers, data scientists, external data vendors, and various trading teams to rapidly resolve data and analytics application issues related to data quality, data integration, model pipelines, and analytics applications.
Knowledge, Skills and Abilities
- Python, SQL
- Familiarity with data engineering
- Experience with AWS data and analytics services or similar cloud vendor services
- Strong problem solving and interpersonal skills
- Ability to organise and prioritise work effectively
Key Responsibilities
- Incident and user management for the data and analytics platform
- Development and maintenance of a Data Quality framework (including anomaly detection)
- Implementation of Python & SQL hotfixes and working with data engineers on more complex issues
- Diagnostic tools implementation and automation of operational processes
Key Relationships
- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment
- Support research analysts and traders with issue resolution
Competencies
- Excellent problem solving skills
- Ability to communicate effectively with a diverse set of customers across business lines and technology
- Report to Head of DSE Engineering Mumbai, who reports to Global Head of Cloud and Data Engineering
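To illustrate the kind of check a data quality framework with anomaly detection might run, here is a small, self-contained sketch that flags days whose row counts deviate sharply from a trailing baseline; the window, threshold and sample numbers are assumptions for the example only.

```python
# Minimal sketch of a row-count anomaly check; window, threshold and the
# sample series are assumptions.
import pandas as pd


def flag_volume_anomalies(daily_counts: pd.Series, window: int = 14, z_thresh: float = 3.0) -> pd.Series:
    """Flag days whose row count sits more than z_thresh standard deviations
    away from the trailing window's mean."""
    rolling_mean = daily_counts.rolling(window, min_periods=window).mean()
    rolling_std = daily_counts.rolling(window, min_periods=window).std()
    z_scores = (daily_counts - rolling_mean) / rolling_std
    return z_scores.abs() > z_thresh


if __name__ == "__main__":
    counts = pd.Series(
        [10_120, 10_340, 9_980, 10_210, 10_050] * 3 + [3_200],  # final day drops sharply
        index=pd.date_range("2024-01-01", periods=16, freq="D"),
    )
    print(counts[flag_volume_anomalies(counts)])  # surfaces the suspicious day
```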
Posted 3 weeks ago
5.0 - 10.0 years
4 - 7 Lacs
Bengaluru
Work from Office
We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include: developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location and transactional signals to power people-based marketing; ingesting vast amounts of identity and event data from our customers and partners; facilitating data transfers across systems; ensuring the integrity and health of our datasets; and much more. As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.
Essential Responsibilities: As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure. Using technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc. daily. Maintaining data quality and accuracy across production data systems. Working with Data Engineers to optimize data models and workflows. Working with Data Analysts to develop ETL processes for analysis and reporting. Working with Product Managers to design and build data products. Working with our DevOps team to scale and optimize our data infrastructure. Participating in architecture discussions, influencing the roadmap, and taking ownership and responsibility over new projects. Participating in the on-call rotation in your respective time zone (be available by phone or email in case something goes wrong).
Desired Characteristics: Minimum 5-10 years of software engineering experience. Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle from inception to production and monitoring. Fluency in Python, or solid experience in Scala or Java. Proficiency with relational databases and advanced SQL. Expertise in services such as Spark and Hive. Experience with web frameworks such as Flask and Django. Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc. Experience with Kafka or other stream/message processing solutions. Experience using cloud services (AWS) at scale. Experience in agile software development processes. Excellent interpersonal and communication skills.
Nice to have: Experience with large-scale / multi-tenant distributed systems. Experience with columnar / NoSQL databases such as Vertica, Snowflake, HBase, Scylla, and Couchbase. Experience with real-time streaming frameworks such as Flink and Storm. Experience with open table formats such as Iceberg, Hudi or Delta Lake.
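As an illustration of the batch signal ingestion this team describes, the sketch below shows a minimal PySpark job that deduplicates raw events and rolls them up per user; the bucket paths, schema and column names are invented for the example and are not Zeta's actual data model.

```python
# Illustrative PySpark batch job: deduplicate raw event signals and aggregate
# per user; the S3 paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-ingest-example").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/dt=2024-01-01/")

# Keep one record per event id to guard against upstream re-deliveries.
deduped = events.dropDuplicates(["event_id"])

# Roll signals up per user for downstream identity-graph joins.
per_user = (
    deduped.groupBy("user_id")
    .agg(
        F.count("*").alias("event_count"),
        F.max("event_time").alias("last_seen"),
    )
)

per_user.write.mode("overwrite").parquet("s3a://example-bucket/curated/user_signals/")
```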
Posted 3 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Noida, Pune, Bengaluru
Hybrid
Job description
Key Responsibilities:
Data Pipeline Development & Optimization: Design, develop, and maintain scalable and high-performance data pipelines using PySpark and Databricks. Ensure data quality, consistency, and security throughout all pipeline stages. Optimize data workflows and pipeline performance, ensuring efficient data processing.
Cloud-Based Data Solutions: Architect and implement cloud-native data solutions using AWS services (e.g., S3, Glue, Lambda, Redshift), GCP (DataProc, DataFlow), and Azure (ADF, ADLF). Work on ETL processes to transform, load, and process data across cloud platforms.
SQL & Data Modeling: Utilize SQL (including windowing functions) to query and analyze large datasets efficiently. Work with different data schemas and models relevant to various business contexts (e.g., star/snowflake schemas, normalized, and denormalized models).
Data Security & Compliance: Implement robust data security measures, ensuring encryption, access control, and compliance with industry standards and regulations. Monitor and troubleshoot data pipeline performance and security issues.
Collaboration & Communication: Collaborate with cross-functional teams (data scientists, software engineers, and business stakeholders) to design and integrate end-to-end data pipelines. Communicate technical concepts clearly and effectively to non-technical stakeholders.
Domain Expertise: Understand and work with domain-related data, tailoring solutions to address the specific business needs of the customer. Optimize data solutions for the business context, ensuring alignment with customer requirements and goals.
Mentorship & Leadership: Provide guidance to junior team members, fostering a collaborative environment and ensuring best practices are followed. Drive innovation and promote a culture of continuous learning and improvement within the team.
Required Qualifications:
Experience: 6-8 years of total experience in data engineering, with 3+ years of hands-on experience in Databricks, PySpark, and AWS. 3+ years of experience in Python and SQL for data engineering tasks. Experience working with cloud ETL services such as AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF.
Technical Skills: Strong proficiency in PySpark for large-scale data processing and transformation. Expertise in SQL, including window functions, for data manipulation and querying. Experience with cloud-based ETL tools (AWS Glue, GCP DataFlow, Azure ADF) and understanding of their integration with cloud data platforms. Deep understanding of data schemas and models used across various business contexts. Familiarity with data warehousing optimization techniques, including partitioning, indexing, and query optimization. Knowledge of data security best practices (e.g., encryption, access control, and compliance).
Agile Methodologies: Experience working in Agile (Scrum or Kanban) teams for iterative development and delivery.
Communication: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
Skills: Python, Databricks, PySpark, SQL
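Since the role leans heavily on SQL windowing functions, here is a short illustrative Spark SQL example that keeps each customer's most recent order with ROW_NUMBER(); the orders table and columns are made up for the sketch.

```python
# Illustrative Spark SQL windowing example: latest order per customer.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-fn-example").getOrCreate()

orders = spark.createDataFrame(
    [
        ("c1", "2024-01-01", 120.0),
        ("c1", "2024-01-05", 80.0),
        ("c2", "2024-01-02", 200.0),
        ("c2", "2024-01-09", 50.0),
    ],
    ["customer_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

# Rank each customer's orders by recency and keep only the most recent one.
latest_orders = spark.sql("""
    SELECT customer_id, order_date, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn
        FROM orders
    )
    WHERE rn = 1
""")
latest_orders.show()
```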
Posted 3 weeks ago
10.0 - 14.0 years
18 - 20 Lacs
Noida
Work from Office
Position Summary
This is a highly visible role that requires a perfect combination of deep technical credibility, strategic acumen and demonstrable leadership competency. You will be the ultimate Trusted Advisor, capable of engaging business and technology leaders within the world's largest enterprises, and guiding their strategic AI-enabled journey. The Country Leader, AI Architecture, is responsible for leading the Labs Architectural services within the region. You will need to provide hands-on technical leadership, whilst managing a small team of senior AI architects and consultants, operating in a fast-moving, highly innovative environment and collaborating with senior Sales and Technical leaders. You will have business responsibility for the provision of innovation-led Labs services, focusing on the design and implementation of advanced AI solutions enabling genuine transformational outcomes. This hands-on leadership role demands a deep understanding of AI and related technologies running in Edge, on-prem and Public Cloud environments. Acting at the forefront of our industry, you will be fully conversant with Generative AI and its impact at both the individual employee and strategic organisational level. The ideal candidate will be an established thought-leader in the AI domain, with solid architectural and engineering credentials maintained at the highest level, working ahead of industry trends, deeply passionate about AI-enabled business transformation and demonstrating a strong innovation-led posture. As a thought leader, you will interact frequently with CxO-level clients and industry leaders, provide expert opinions, and contribute to HCL's strategic vision.
Key Responsibilities
Technical & Engineering Leadership: Act as ultimate Design Authority for sophisticated AI solutions and related technology architecture. Lead high-level architectural discussions with clients, providing expert guidance on best practices for AI implementations across AI PC, Edge, Data Centre and Public Cloud environments. Ensure solutions align with modern best practices across the full spectrum of platforms and environments. Deep understanding across GPU/NPU, Cognitive Infrastructure, Application and Copilot/agent domains. Contribute to HCLTech thought leadership in the AI & Cloud domains with a deep understanding of open-source (e.g., Kubernetes, OPEA) and partner technologies. Collaborate on joint technical projects with global partners, including Google, Microsoft, AWS, NVIDIA, IBM, Red Hat, Intel, and Dell.
Service Delivery & Innovation: Design innovative AI solutions from ideation to MVP, rapidly enabling genuine business value. Optimize AI and cloud architectures to meet client requirements, balancing efficiency, accuracy and effectiveness. Assess and review existing complex solutions and recommend architectural improvements to transform applications with the latest AI technologies. Drive the adoption of cutting-edge GenAI technologies, spearheading initiatives that push the boundaries of AI capability across the full spectrum of environments.
Thought Leadership and Client Engagement: Provide expert architectural and strategy guidance to clients on incorporating Generative AI into their business and technology landscape. Conduct workshops, briefings, and strategic dialogues to educate clients on AI benefits and applications, establishing strong, trust-based relationships. Act as a trusted advisor, contributing to technical projects with a strong focus on technical excellence and on-time delivery.
Author whitepapers and blogs, and speak at industry events, maintaining a visible presence as a thought leader in AI and associated technologies.
Collaboration and Customer Engagement: Engage with multiple customers simultaneously, building high-impact consultative relationships. Work closely with internal teams and global partners to ensure seamless collaboration and knowledge sharing across projects. Maintain hands-on technical credibility, staying ahead of industry trends and mentoring others in the organization.
Management and Leadership: Demonstrable track record building and managing small Architectural or Engineering teams. Support career growth and professional development of the team. Enrich and enable world-class technical excellence across the team, supported by a culture of collaboration, respect, diversity, inclusion and deep trustful relationships.
Mandatory Skills & Experience
Management & leadership: Demonstrable track record building and leading Architectural or Engineering teams. Proven ability to combine strategic business and commercial skills, performing at the highest level in senior client relationships.
Experience: 10+ years of architecture design; 10+ years of software engineering; 5+ years in a senior Team Leader or similar management position. Significant client-facing engagement within a GSI, system integrator, professional services or technology organization.
Technologies: Professional-level expertise in Public Cloud environments (AWS, Azure, Google Cloud). Demonstrable coding proficiency with Python, Java or Go languages.
AI Expertise: Advanced machine learning algorithms, GenAI models (e.g., GPT, BERT, DALL-E, Gemini), NLP techniques. Working familiarity with Copilot solutions, in both the software engineering and office productivity domains.
Business Expertise: Extensive track record performing a lead technical role in a sales, business-development or other commercial environment. Negotiating and consultative skills; experience leading the complete engagement lifecycle.
Communication: Experienced public speaker, with an ability to connect with senior business leaders.
Project Methodologies: Agile and Scrum project management.
Desired Skills & Experience
Knowledge of GenAI operations (LLMOps); experience governing AI models in production environments. Proficiency in data engineering for AI, including data preprocessing, feature engineering, and pipeline creation. Expertise in AI model fine-tuning and evaluation, with a focus on improving performance for specialized tasks. Copilot design, engineering and extensions. Knowledgeable about Responsible AI, including governance and ethics. Bias mitigation, with experience in implementing strategies to ensure fair and unbiased AI solutions. Deep Learning Frameworks (TensorFlow, PyTorch). Innovation and Emerging Technology Trends. Strategic AI Vision and Roadmapping. Enthusiastic about working in a fast-paced environment using the latest technologies, and passionate about HCL's dynamic and high-energy Lab culture.
Verifiable Certification: Recognized professional certification from Google, Microsoft or AWS in an AI and/or Cloud-related domain.
Soft Skills and Behavioural Competencies: Exemplary communication and leadership skills, capable of inspiring teams and making strategic decisions that align with business goals. Demonstrates a strong customer orientation, innovative problem-solving abilities, and effective cross-cultural collaboration. Expert at driving organizational change and fostering a culture of innovation.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
Responsibilities: Design, develop, and maintain scalable data pipelines using Databricks, PySpark, Spark SQL, and Delta Live Tables. Collaborate with cross-functional teams to understand data requirements and translate them into efficient data models and pipelines. Implement best practices for data engineering, including data quality and data security. Optimize and troubleshoot complex data workflows to ensure high performance and reliability. Develop and maintain documentation for data engineering processes and solutions.
Requirements: Bachelor's or Master's degree. Proven experience as a Data Engineer, with a focus on Databricks, PySpark, Spark SQL, and Delta Live Tables. Strong understanding of data warehousing concepts, ETL processes, and data modelling. Proficiency in programming languages such as Python and SQL. Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services. Excellent problem-solving skills and the ability to work in a fast-paced environment. Strong leadership and communication skills, with the ability to mentor and guide team members.
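To give a concrete feel for the Delta Live Tables work described above, here is a hedged sketch of a two-table DLT pipeline with a quality expectation; the landing path and column names are assumptions, and the code only runs inside a Databricks DLT pipeline where `dlt` and `spark` are provided.

```python
# Sketch of a Delta Live Tables pipeline with a quality expectation.
# Runs only inside a Databricks DLT pipeline (where `dlt` and `spark` exist);
# the landing path and columns are assumptions.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw orders landed from cloud storage.")
def raw_orders():
    return spark.read.format("json").load("/mnt/landing/orders/")


@dlt.table(comment="Cleaned orders with basic quality checks applied.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows failing the expectation
def clean_orders():
    return (
        dlt.read("raw_orders")
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )
```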
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
GCP Data Engineer – BigQuery, SQL, Python, Talend ETL programming, GCP or any cloud technology.
Job Description: Experienced GCP data engineer with BigQuery, SQL, Python, and Talend ETL programming on GCP or any cloud technology. Good experience in building pipelines of GCP components to load data into BigQuery and into Cloud Storage buckets. Excellent data analysis skills. Good written and oral communication skills. Self-motivated and able to work independently.
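A minimal sketch of the GCP pipeline pattern the description mentions (loading data from a Cloud Storage bucket into BigQuery) using the google-cloud-bigquery client; the project, dataset, table and bucket names are placeholders.

```python
# Illustrative load of a CSV export from Cloud Storage into BigQuery;
# project, dataset, table and bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project.analytics.daily_sales"  # hypothetical destination table
uri = "gs://example-bucket/exports/daily_sales_2024-01-01.csv"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                   # infer the schema from the file
    write_disposition="WRITE_APPEND",  # append to the existing table
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load job completes, raising on failure

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```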
Posted 3 weeks ago
5.0 - 10.0 years
0 - 0 Lacs
Chennai
Work from Office
Deep knowledge of cloud platforms (AWS, Azure, Google Cloud), including their AI-specific services such as AWS SageMaker or Google AI Platform.
AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, and Scikit-learn, along with experience in ML model lifecycle management.
Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes.
Regards,
Jayasurya V
jayasurya.v@vdartinc.com
Posted 3 weeks ago
6.0 - 10.0 years
6 - 10 Lacs
Hyderabad, Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement and support analytics solutions that provide insights to make better decisions. Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools. Perform multiple aspects involved in the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Provide technical leadership and collaborate within a team environment as well as work independently. Be a part of a DevOps team that completely owns and supports their product. Implement batch and streaming data pipelines using cloud technologies. Lead development of coding standards, best practices, and privacy and security guidelines. Mentor others on technical and domain skills to create multi-functional teams.
All you'll need for success. Minimum Qualifications: Education & Prior Job Experience
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
3. 3 years of data engineering experience using SQL
4. 2 years of cloud development (Microsoft Azure preferred) including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps and Power BI
5. Combination of Development, Administration & Support experience in several of the following tools/platforms required:
a. Scripting: Python, PySpark, Unix, SQL
b. Data Platforms: Teradata, SQL Server
c. Azure Data Explorer (administration skills are a plus)
d. Azure Cloud Technologies:
Top 3 Mandatory Skills and Experience: SQL, Python, PySpark
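As one illustration of the streaming ingestion this role covers, the sketch below consumes events from an Azure Event Hub with the azure-eventhub SDK; the connection string, hub name and handling logic are placeholders, and a production consumer would also configure a checkpoint store.

```python
# Illustrative Event Hub consumer using the azure-eventhub SDK; connection
# string and hub name are placeholders. A real consumer would configure a
# checkpoint store (e.g., Blob Storage) rather than checkpointing in memory.
from azure.eventhub import EventHubConsumerClient


def on_event(partition_context, event):
    # Placeholder handling: in practice this would land the event in storage.
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)


client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hub-connection-string>",
    consumer_group="$Default",
    eventhub_name="telemetry",
)

with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")
```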
Posted 3 weeks ago
4.0 - 9.0 years
3 - 8 Lacs
Pune
Work from Office
About Client: Hiring for one of the most prestigious multinational corporations.
Job Title: Big Data Engineer (Spark, Scala)
Experience: 4 to 10 years
Key Responsibilities:
Data Engineering: Design, develop, and maintain large-scale distributed data processing pipelines and data solutions using Apache Spark with Scala and/or Python.
Data Integration: Work on integrating various data sources (batch and real-time) from both structured and unstructured data formats into big data platforms like Hadoop, AWS EMR, or Azure HDInsight.
Performance Optimization: Optimize Spark jobs for better performance, managing large datasets and ensuring efficient resource usage.
Architecture Design: Participate in the design and implementation of data pipelines and data lakes that support analytics, reporting, and machine learning applications.
Collaboration: Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and ensure solutions are aligned with business goals.
Big Data Tools: Implement and manage big data technologies like Hadoop, Kafka, HBase, Hive, Presto, etc.
Automation: Automate repetitive tasks using scripting and monitoring solutions for continuous data pipeline management.
Troubleshooting: Identify and troubleshoot data pipeline issues and ensure data integrity.
Cloud Platforms: Work with cloud-based services and platforms like AWS, Azure, or Google Cloud for data storage, compute, and deployment.
Code Quality: Ensure high code quality by following best practices, code reviews, and implementing unit and integration tests.
Technical Skills:
Experience: 6-9 years of hands-on experience in Big Data Engineering with a focus on Apache Spark (preferably with Scala and/or Python).
Languages: Proficiency in Scala and/or Python for building scalable data processing applications. Knowledge of Java is a plus.
Big Data Frameworks: Strong experience with Apache Spark, Hadoop, Hive, HBase, Kafka, and other big data tools.
Data Processing: Strong understanding of batch and real-time data processing and workflows.
Cloud Experience: Proficient in cloud platforms such as AWS, Azure, or Google Cloud Platform for deploying and managing big data solutions.
SQL/NoSQL: Experience working with SQL and NoSQL databases, particularly Hive, HBase, or Cassandra.
Data Integration: Strong skills in integrating and processing diverse data sources, including working with data lakes and data warehouses.
Performance Tuning: Hands-on experience in performance tuning and optimization of Spark jobs and jobs running on Hadoop clusters.
Data Pipelines: Strong background in designing, building, and maintaining robust data pipelines for large-scale data processing.
Version Control: Familiarity with Git or other version control systems.
DevOps & Automation: Knowledge of automation tools and CI/CD pipelines for data workflows (Jenkins, Docker, Kubernetes).
Analytical Skills: Strong problem-solving skills and a deep understanding of data modeling, data structures, and algorithms.
Notice period: Immediate joiners
Location: Pune
Mode of Work: WFO (Work From Office)
Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
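To make the Spark performance-optimization responsibility more concrete, here is a small PySpark sketch showing two common tuning techniques (broadcasting a small dimension table and partition-aligned writes); paths and column names are invented for the example.

```python
# Illustrative PySpark tuning sketch: broadcast the small dimension table to
# avoid shuffling the large fact table, and partition the output on the write
# key; paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("tuning-example")
    .config("spark.sql.shuffle.partitions", "200")  # size this to the cluster
    .getOrCreate()
)

events = spark.read.parquet("s3a://example-bucket/events/")        # large fact table
countries = spark.read.parquet("s3a://example-bucket/countries/")  # small dimension

# Broadcasting the small side turns a shuffle join into a map-side join.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

# Repartition on the write key so output files align with downstream reads.
(
    enriched.repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-bucket/curated/events/")
)
```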
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Purpose: We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required, coupled with strong communication skills.
Requirements: The ideal candidate will possess experience with Azure Data Services, including Azure Data Factory, Azure Synapse or similar tools; experience of creating DAGs, implementing activities, and running Apache Airflow; and familiarity with DevOps practices, CI/CD pipelines and Azure DevOps.
Key Responsibilities: Design, develop, and maintain ETL Notebook orchestration pipelines using PySpark and Microsoft Fabric. Work with Apache Delta Lake tables, Change Data Feed (CDF), Lakehouses and custom libraries. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions. Migrate and integrate data from legacy SQL Server environments into modern data platforms. Optimize data pipelines and workflows for scalability, efficiency, and reliability. Provide technical leadership and mentorship to junior developers and other team members. Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability. Debug code, breaking it down into testable components to identify and resolve issues. Develop, maintain, and enforce data engineering best practices, coding standards, and documentation. Conduct code reviews and provide constructive feedback to improve team productivity and code quality. Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.
Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools. Proficiency in SQL with extensive experience in complex queries, performance tuning, and data modeling. Experience with Microsoft Fabric or similar cloud-based data integration platforms is a plus. Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing. Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage. Experience working with both structured and unstructured data sources. Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues. Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools. Experience of creating DAGs, implementing activities, and running Apache Airflow. Familiarity with DevOps practices, CI/CD pipelines and Azure DevOps.
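As a hedged illustration of the Delta Lake Change Data Feed work mentioned above, the sketch below reads changes from a CDF-enabled Delta table; the table name and starting version are assumptions, and CDF must already be enabled on the table.

```python
# Illustrative read of a Delta table's Change Data Feed; the table name and
# starting version are assumptions, and delta.enableChangeDataFeed must be
# set to true on the table beforehand.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cdf-example").getOrCreate()

changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 10)  # pick up changes committed after version 10
    .table("lakehouse.sales_orders")
)

# _change_type distinguishes insert / update_preimage / update_postimage / delete.
changes.filter("_change_type IN ('insert', 'update_postimage')").show()
```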
Posted 3 weeks ago
6.0 - 10.0 years
7 - 11 Lacs
Greater Noida
Work from Office
Work closely with source data application teams and product owners to design, implement and support analytics solutions that provide insights to make better decisions. Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools. Perform multiple aspects involved in the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. Provide technical leadership and collaborate within a team environment as well as work independently. Be a part of a DevOps team that completely owns and supports their product. Implement batch and streaming data pipelines using cloud technologies. Lead development of coding standards, best practices, and privacy and security guidelines. Mentor others on technical and domain skills to create multi-functional teams.
All you'll need for success. Minimum Qualifications: Education & Prior Job Experience
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
2. 3 years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
3. 3 years of data engineering experience using SQL
4. 2 years of cloud development (Microsoft Azure preferred) including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps and Power BI
5. Combination of Development, Administration & Support experience in several of the following tools/platforms required:
a. Scripting: Python, PySpark, Unix, SQL
b. Data Platforms: Teradata, SQL Server
c. Azure Data Explorer (administration skills are a plus)
d. Azure Cloud Technologies:
Top 3 Mandatory Skills and Experience: SQL, Python, PySpark
Posted 3 weeks ago
9.0 - 14.0 years
55 - 60 Lacs
Bengaluru
Hybrid
Dodge Position Title: Technology Lead
STG Labs Position Title:
Location: Bangalore, India
About Dodge
Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by transforming data into tangible guidance, driving unparalleled advancement. Dodge is the catalyst for modern construction. https://www.construction.com/
About Symphony Technology Group (STG)
STG is a Silicon Valley (California) based private equity firm that has a long and successful track record of transforming high-potential software and software-enabled services companies, as well as insights-oriented companies, into definitive market leaders. The firm brings expertise, flexibility, and resources to build strategic value and unlock the potential of innovative companies. Partnering to build customer-centric, market-winning portfolio companies, STG creates sustainable foundations for growth that bring value to all existing and future stakeholders. The firm is dedicated to transforming and building outstanding technology companies in partnership with world-class management teams. With over $5.0 billion in assets under management, including a recently raised $2.0 billion fund, STG's expansive portfolio has consisted of more than 30 global companies. STG Labs is the incubation center for many of STG's portfolio companies, building their engineering, professional services, and support delivery teams in India. STG Labs offers an entrepreneurial start-up environment for software and AI engineers, data scientists and analysts, and project and product managers, and provides a unique opportunity to work directly for a software or technology company. Based in Bangalore, STG Labs supports hybrid working. https://stg.com
Roles and Responsibilities
Lead the design, deployment, and management of data mart and analytics infrastructure leveraging AWS services
Implement and manage CI/CD pipelines using industry-leading DevOps practices and tools
Design, implement, and oversee API architecture, ensuring robust, scalable, and secure REST API development using AWS API Gateway
Collaborate closely with data engineers, architects, and analysts to design highly performant and scalable data solutions
Mentor and guide engineering teams, fostering a culture of continuous learning and improvement
Optimize cloud resources for cost-efficiency, scalability, and reliability
Establish best practices and standards for AWS infrastructure, DevOps processes, API design, and data analytics workflows
Qualifications
Hands-on working knowledge and experience is required in:
Data Structures
Memory Management
Basic Algos (Search, Sort, etc.)
AWS Data Services: Redshift, Glue, EMR, Athena, Lake Formation, Lambda
Infrastructure-as-Code Tools: Terraform, AWS CloudFormation
Scripting Languages: Python, Bash, SQL
DevOps Tooling: Docker, Kubernetes, Jenkins, Bitbucket - must be comfortable in CLI / terminal environments
Command Line / Terminal Environments AWS Security Best Practices Scalable Data Marts, Analytics Systems, and RESTful APIs Hands-on working knowledge and experience is preferred in: Container Orchestration: Kubernetes, EKS Data Visualization & Warehousing: Tableau, Data Warehouse Machine Learning & Big Data Pipelines Certifications Preferred : AWS Certifications (Solutions Architect Professional, DevOps Engineer) (Preferred Skill).
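As a small illustration of the data-mart querying side of this role, here is a hedged sketch of running an Athena query from Python with boto3. The database, table, and S3 output location are assumptions for the example, not Dodge specifics.

# Illustrative only: query a data mart table via Amazon Athena with boto3.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT project_id, COUNT(*) AS bids FROM bids GROUP BY project_id",
    QueryExecutionContext={"Database": "analytics_mart"},      # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# The query runs asynchronously; poll or fetch results with this id.
print(response["QueryExecutionId"])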
Posted 3 weeks ago
3.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
We are looking for a Senior Data Platform Engineer to lead the design, development, and optimization of our data platform infrastructure. In this role, you will drive scalability, reliability, and performance across our data systems, working closely with data engineers, analysts, and product teams to enable data-driven decision-making at scale.
Required Skills & Experience:
Architect and implement scalable, secure, and high-performance data platforms (on AWS cloud using Databricks).
Build and manage data pipelines and ETL processes using modern data engineering tools (AWS RDS, REST APIs, and S3-based ingestion); a sketch of one such ingestion path follows below.
Monitor and maintain production data pipelines and work on enhancements.
Optimize data systems for performance, reliability, and cost efficiency.
Implement data governance, quality, and observability best practices per Freshworks standards.
Collaborate with cross-functional teams to support data needs.
Qualifications:
1. Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
2. Good exposure to data structures and algorithms.
3. Proven backend development experience using Scala, Spark or Python.
4. Strong understanding of REST API development, web services, and microservices architecture.
5. Good to have: experience with Kubernetes and containerized deployment.
6. Proficient in working with relational databases like MySQL, PostgreSQL, or similar platforms.
7. Solid understanding and hands-on experience with AWS cloud services.
8. Strong knowledge of code versioning and CI tools, such as Git and Jenkins.
9. Excellent problem-solving skills, critical thinking, and a keen attention to detail.
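A minimal sketch of the RDS-to-S3 ingestion path mentioned above, assuming hypothetical connection details and that the JDBC driver is available on the cluster; it is an illustration, not the team's actual pipeline.

# Pull a table from AWS RDS (PostgreSQL) over JDBC and land it as Delta on S3.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rds_to_s3").getOrCreate()

orders = (spark.read
          .format("jdbc")
          .option("url", "jdbc:postgresql://example-rds:5432/shop")   # hypothetical host/db
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")   # in practice, read from a secret store
          .load())

(orders.write
 .format("delta")
 .mode("append")
 .save("s3://example-curated/orders/"))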
Posted 3 weeks ago
5.0 - 10.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Roles and Responsibilities
Lead the design, development, and maintenance of data pipelines and ETL processes.
Architect and implement scalable data solutions using Databricks and AWS.
Optimize data storage and retrieval systems using Rockset, Clickhouse, and CrateDB.
Develop and maintain data APIs using FastAPI.
Orchestrate and automate data workflows using Airflow (a sketch follows below).
Collaborate with data scientists and analysts to support their data needs.
Ensure data quality, security, and compliance across all data systems.
Mentor junior data engineers and promote best practices in data engineering.
Evaluate and implement new data technologies to improve the data infrastructure.
Participate in cross-functional projects and provide technical leadership.
Manage and optimize data storage solutions using AWS S3, implementing best practices for data lakes and data warehouses.
Implement and manage Databricks Unity Catalog for centralized data governance and access control across the organization.
Qualifications Required
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
5+ years of experience in data engineering, with at least 2-3 years in a lead role
Strong proficiency in Python, PySpark, and SQL
Extensive experience with Databricks and AWS cloud services
Hands-on experience with Airflow for workflow orchestration
Familiarity with FastAPI for building high-performance APIs
Experience with columnar databases like Rockset, Clickhouse, and CrateDB
Solid understanding of data modeling, data warehousing, and ETL processes
Experience with version control systems (e.g., Git) and CI/CD pipelines
Excellent problem-solving skills and ability to work in a fast-paced environment
Strong communication skills and ability to work effectively in cross-functional teams
Knowledge of data governance, security, and compliance best practices
Proficiency in designing and implementing data lake architectures using AWS S3
Experience with Databricks Unity Catalog or similar data governance and metadata management tools
Skills and Experience Required - Tech Stack: Databricks, Python, PySpark, SQL, Airflow, FastAPI, AWS (S3, IAM, ECR, Lambda), Rockset, Clickhouse, CrateDB
Why you'll love working with us:
Opportunity to work on business challenges from top global clientele with high impact.
Vast opportunities for self-development, including online university access and sponsored certifications.
Sponsored tech talks, industry events and seminars to foster innovation and learning.
Generous benefits package including health insurance, retirement benefits, flexible work hours, and more.
Supportive work environment with forums to explore passions beyond work.
This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.
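A hedged sketch of how such a workflow might be orchestrated in Airflow; the dag_id, schedule, and Python callables are illustrative assumptions rather than the actual pipeline.

# Two-step daily pipeline expressed as an Airflow 2.x DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from S3")          # placeholder for the real extract step

def transform():
    print("run PySpark job on Databricks")  # placeholder for the real transform step

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task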
Posted 3 weeks ago
5.0 - 10.0 years
10 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities
Databricks skillset with PySpark and SQL
Strong proficiency in PySpark and SQL
Understanding of data warehousing concepts
ETL processes / data pipeline building with ADB/ADF
Experience with the Azure cloud platform and knowledge of data manipulation techniques
Experience working with business teams to convert requirements into technical stories for migration
Leading technical discussions and implementing the solution (an illustrative incremental-load sketch follows below)
Experience with multi-tenant architecture, having delivered projects on the Databricks + Azure combination
Exposure to Unity Catalog is useful
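One common pattern in Databricks migration work is an incremental upsert into a Delta table. A minimal sketch follows, assuming hypothetical mount paths and a key column named customer_id; it is only an illustration of the technique.

# Incremental upsert (MERGE) into a Delta table on Azure Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New or changed rows produced by an upstream ADF copy activity (path assumed).
updates = spark.read.format("parquet").load("/mnt/raw/customers_delta/")

target = DeltaTable.forPath(spark, "/mnt/curated/customers/")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())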
Posted 3 weeks ago
1.0 - 4.0 years
10 - 14 Lacs
Pune
Work from Office
Overview
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark, Databricks, BigQuery, Airflow and Composer.
Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization (see the sketch at the end of this listing).
Collaborate with cross-functional teams to resolve technical issues and gather requirements.
Responsibilities
Ensure data quality and integrity through data validation and cleansing processes.
Analyze existing SQL queries, functions, and stored procedures for performance improvements.
Develop database routines such as procedures, functions, and views/materialized views.
Participate in data migration projects and understand technologies like Delta Lake, data warehouses, and BigQuery.
Debug and solve complex problems in data pipelines and processes.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
Strong understanding of distributed data processing platforms like Databricks and BigQuery.
Proficiency in Python, PySpark, and SQL programming languages.
Experience with performance optimization for large datasets.
Strong debugging and problem-solving skills.
Fundamental knowledge of cloud services, preferably Azure or GCP.
Excellent communication and teamwork skills.
Nice to Have: Experience in data migration projects; understanding of technologies like Delta Lake and data warehouses.
What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.
At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.
MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
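The partitioning technique referenced in the responsibilities above is sketched here under assumed table paths and column names; it is illustrative only, not the team's actual code.

# Write a large event table partitioned by date so downstream queries can prune files.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/bronze/events/")   # hypothetical path

(events
 .withColumn("event_date", F.to_date("event_ts"))
 .write
 .format("delta")
 .mode("overwrite")
 .partitionBy("event_date")                                        # partition pruning key
 .save("/mnt/silver/events/"))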
Posted 3 weeks ago
8.0 - 12.0 years
35 - 40 Lacs
Mumbai
Work from Office
JOB OVERVIEW: As part of Business Focused IT, the candidate would be in charge of scaling up and managing an enterprise-wide data platform that supports the analytical needs of the complete Pharma business (extensible to other businesses as required). The platform should be flexible enough to support the business operations of the future and provide storytelling-style, intuitive analytics. This position would be part of the Analytics Center of Excellence.
Essential Skills & Experience:
BS/MS degree in computer science, mathematics, or an equivalent relevant degree, with 8+ years in Analytics, BI and Data Warehousing.
Experience leading in a highly cross-functional environment, collaborating closely with IT, Finance & Engineering.
Hands-on experience in architecting and building scalable data platforms, ETL processes, and distributed systems for data processing, data migration and quality.
Strong familiarity and working knowledge of cloud platforms like AWS and Snowflake.
Experience in building compelling data visualizations using business intelligence and data visualization tools like Tableau, BI and Qlik.
Ability to develop and execute data strategy in collaboration with business leads.
Excellent problem-solving skills, with the ability to translate complex data into business recommendations for business stakeholders.
Excellent communication skills, with the ability to explain complex and abstract technological concepts to business stakeholders.
Proficiency in SQL for extracting, aggregating and processing large volumes of structured/unstructured data, and experience in advanced query optimization techniques (a small query sketch follows at the end of this listing).
Proficiency in data acquisition and data preparation by pulling data from various sources.
Self-driven, with the ability to learn new, unfamiliar tools and deliver on ambiguous projects with incomplete data.
Experience reviewing and providing feedback on architecture and code reviews.
KEY ROLES/RESPONSIBILITIES:
Responsible for developing and maintaining the global data marketplace (data lake)
Manages the sourcing and acquisition of internal (including IT and OT) and external data sets
Ensure adherence of data to enterprise business rules and, especially, to legal and regulatory requirements
Define the data quality standards for cross-functional data used in BI/analytics models and reports
Provide input into data integration standards and the enterprise data architecture
Responsible for modelling and designing the application data structure, storage and integration, and leading the database analysis, design and build effort
Review the database deliverables throughout development, thereby ensuring quality and traceability to requirements and adherence to all quality management plans and standards
Develop strategies for data acquisition, dissemination and archival
Manage the data architecture within the big data solution, such as Hadoop, Cloudera, etc.
Responsible for modelling and designing the big data structure, storage and integration, and leading the database analysis, design, visualization and build effort
Review the database deliverables throughout development, thereby ensuring quality and traceability to requirements and adherence to all quality management plans and standards
Work with partners and vendors (in a multi-vendor environment) for various capabilities
Continuously review the analytics stack for performance improvements, reduce overall TCO through cost optimizations, and improve predictive capabilities
Bring thought leadership with regard to analytics to make Piramal Pharma an analytics-driven business, and help in driving business KPIs
Prepare Analytics Platform budgets for both CAPEX and OPEX for assigned initiatives, and roll out the initiatives within budget and projected timelines
Drive the MDM strategy and implementation initiative
Responsible for overall delivery and customer satisfaction for business services, interaction with business leads, project status management and reporting, implementation management, and identifying further opportunities for automation within PPL
Ensure IT compliance in all project rollouts as per regulatory guidelines
Conduct change management and impact analysis for approved enhancements
Uphold data integrity requirements following ALCOA+ guidelines
Monitor SLAs and KPIs as agreed upon by the business, offering root-cause analysis and risk mitigation action plans when needed
Drive awareness and learning across Piramal Pharma in the Enterprise Data Platform
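The SQL extraction work mentioned above, when the warehouse is Snowflake, often runs through the Snowflake Python connector. A minimal sketch follows, assuming hypothetical credentials, warehouse, and table names.

# Pull a simple aggregate from Snowflake for a BI extract.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",     # hypothetical account identifier
    user="analytics_svc",
    password="***",                # in practice, use a secrets manager
    warehouse="ANALYTICS_WH",
    database="PHARMA_DL",
)

cur = conn.cursor()
cur.execute("""
    SELECT region, SUM(net_sales) AS net_sales
    FROM curated.sales
    GROUP BY region
""")
for region, net_sales in cur.fetchall():
    print(region, net_sales)

cur.close()
conn.close()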
Posted 3 weeks ago
4.0 - 7.0 years
10 - 20 Lacs
Noida, Hyderabad, Pune
Work from Office
Streaming data - technical skills requirements:
Experience: 5+ years
Solid hands-on and solution-architecting experience in big data technologies (AWS preferred)
Skills required:
- Hands-on experience in AWS DynamoDB, EKS, Kafka, Kinesis, Glue, EMR
- Hands-on experience with a programming language like Scala with Spark
- Good command and working experience with Hadoop MapReduce, HDFS, Hive, HBase, and/or NoSQL databases
- Hands-on working experience with at least one data engineering/analytics platform (Hortonworks, Cloudera, MapR, AWS), AWS preferred
- Hands-on experience with data ingestion using Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on working experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming); a streaming-ingest sketch follows below
- Data warehouse exposure with Apache NiFi, Apache Airflow, Kylo
- Operationalization of ML models on AWS (e.g. deployment, scheduling, model monitoring, etc.)
- Feature engineering and data processing to be used for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured, real-time/batch, and synchronous/asynchronous event data using MQ, Kafka, and stream processing
- Hands-on working experience in analysing source system data and data flows, working with structured and unstructured data
- Must be very strong in writing SQL queries
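A hedged sketch of an event-driven ingest with Spark Structured Streaming reading from Kafka; brokers, topic, and paths are assumptions, and the spark-sql-kafka package is assumed to be on the cluster.

# Stream clickstream events from Kafka and land them on S3 with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
          .option("subscribe", "clickstream")                  # hypothetical topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload"))

query = (events
         .withColumn("ingested_at", F.current_timestamp())
         .writeStream
         .format("parquet")
         .option("path", "s3://example-landing/clickstream/")
         .option("checkpointLocation", "s3://example-landing/_checkpoints/clickstream/")
         .start())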
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Job Description - Postgres Developer
The Postgres developer will be responsible for development on the Postgres platform hosted in Azure. Good data engineering, data modeling and SQL knowledge is a must, along with a Postgres programming background. The developer will be responsible for providing design and development solutions for applications on Postgres (EDB).
Essential Job Functions:
Understand requirements and engage with the team to design and deliver projects.
Design and implement Postgres EDB projects in CMS.
Design and develop the application lifecycle utilizing EDB Postgres / Azure technologies.
Participate in design and planning and necessary documentation.
Participate in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
Hands-on with PSQL/SQL development and Unix scripting (an illustrative upsert sketch follows below).
Engage with the team to develop and deliver cross-functional products.
Key Skills
Data engineering, SQL and ETL
Unix scripting
Postgres DBMS
Data transfer methodologies
CI/CD
Strong communication
Other Responsibilities:
Document and maintain project artifacts.
Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
Complete training as required for Privacy, Code of Conduct, etc.
Promptly report any known or suspected loss, theft or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
Adhere to the company's compliance program.
Safeguard the company's intellectual property, information, and assets.
Other duties as assigned.
Minimum Qualifications and Job Requirements:
Bachelor's degree in CS.
7 years of hands-on experience in designing and developing DB solutions.
5 years of hands-on experience in Oracle or Postgres DBMS.
5 years of hands-on experience in Unix scripting, SQL, object-oriented programming, ETL and unit testing.
Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.
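An illustrative sketch of the kind of Postgres ETL step this role involves, driven from Python with psycopg2; connection details, schemas, and table names are assumptions.

# Upsert the day's staged records into a reporting table on Postgres (EDB).
import psycopg2

conn = psycopg2.connect(
    host="example-edb.postgres.database.azure.com",   # hypothetical Azure-hosted instance
    dbname="cms",
    user="etl_user",
    password="***",                                    # in practice, use a key vault
)

with conn, conn.cursor() as cur:
    cur.execute("""
        INSERT INTO reporting.claims (claim_id, status, updated_at)
        SELECT claim_id, status, now()
        FROM staging.claims
        ON CONFLICT (claim_id) DO UPDATE
        SET status = EXCLUDED.status, updated_at = EXCLUDED.updated_at;
    """)
conn.close()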
Posted 3 weeks ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Bengaluru
Work from Office
Job Role & responsibilities:
Responsible for architecting, building and deploying data systems, pipelines, etc.
Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
Responsible for design, implementation, development and migration; migrate data from traditional database systems to the cloud environment.
Architect and implement ETL and data movement solutions.
Technical skill, qualification & experience required:
4.5-7 years of experience in Data Engineering, Azure Cloud Data Engineering, Azure Databricks, Data Factory, PySpark, SQL, Python
Hands-on experience in Azure Databricks, Data Factory, PySpark, SQL
Proficient in cloud services - Azure
Strong hands-on experience working with streaming datasets
Hands-on expertise in data refinement using PySpark and Spark SQL (a small sketch follows below)
Familiarity with building datasets using Scala
Familiarity with tools such as Jira and GitHub
Experience leading agile scrum, sprint planning and review sessions
Good communication and interpersonal skills
Comfortable working in a multidisciplinary team within a fast-paced environment
Immediate joiners will be preferred.
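A minimal sketch of the "data refinement using PySpark and Spark SQL" pattern noted above; paths and column names are assumptions for illustration.

# Refine a raw Delta table into an hourly aggregate with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

trips = spark.read.format("delta").load("/mnt/raw/trips/")   # hypothetical path
trips.createOrReplaceTempView("trips")

refined = spark.sql("""
    SELECT city,
           date_trunc('hour', pickup_ts) AS pickup_hour,
           COUNT(*)                      AS trip_count
    FROM trips
    WHERE fare_amount > 0
    GROUP BY city, date_trunc('hour', pickup_ts)
""")

refined.write.format("delta").mode("overwrite").save("/mnt/refined/trip_counts/")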
Posted 3 weeks ago
8.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (Java + Hadoop/Spark)
Location: Bangalore
Type: Full Time
Experience: 8-12 years
Notice Period: Immediate joiners to 30 days
Job Description: We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.
Key Responsibilities:
Develop and maintain data pipelines using Java.
Work with big data technologies such as Hadoop or Spark to process large datasets.
Optimize data workflows and ensure high performance and reliability.
Collaborate with data scientists, analysts, and other engineers on data-related initiatives.
Requirements:
Strong programming skills in Java.
Hands-on experience with Hadoop or Spark.
Experience with data ingestion, transformation, and storage solutions.
Familiarity with distributed systems and big data architecture.
Posted 3 weeks ago