Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
10.0 years
6 - 9 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 11 S&P Global EDO The Role: Lead - Software Engineering, IT Application Development. Join Our Team: Step into a dynamic team at the cutting edge of data innovation! You’ll collaborate daily with talented professionals from around the world, designing and developing next-generation data products for our clients. Our team thrives on a diverse toolkit that evolves with emerging technologies, offering you the chance to work in a vibrant, global environment that fosters creativity and teamwork. The Impact: As a Lead Software Developer at S&P Global, you’ll be a driving force in shaping the future of our data products. Your expertise will streamline software development and deployment, aligning cutting-edge solutions with business needs. By ensuring seamless integration and continuous delivery, you’ll enhance product capabilities, delivering high-quality systems that meet the highest standards of availability, security, and performance. Your work will empower our clients with impactful, data-driven solutions, making a real difference in the financial world. What’s in it for You: Career Development: Build a rewarding career with a global leader in financial information and analytics, supported by continuous learning and a clear path to advancement. Dynamic Work Environment: Thrive in a fast-paced, forward-thinking setting where your ideas fuel innovation and your contributions shape groundbreaking solutions. Skill Enhancement: Elevate your expertise on an enterprise-level platform, mastering the latest tools and techniques in software development. Versatile Experience: Dive into full-stack development with hands-on exposure to cloud computing, Big Data, and revolutionary GenAI technologies. Leadership Opportunities: Guide and inspire a skilled team, steering the direction of our products and leaving your mark on the future of technology at S&P Global. Responsibilities: Architect and develop scalable Big Data and cloud applications, harnessing a range of cloud services to create robust, high-performing solutions. Design and implement advanced CI/CD pipelines, automating software delivery for fast, reliable deployments that keep us ahead of the curve. Tackle complex challenges head-on, troubleshooting and resolving issues to ensure our products run flawlessly for clients. Lead by example, providing technical guidance and mentoring to your team, driving innovation and embracing new processes. Deliver top-tier code and detailed system design documents, setting the standard with technical walkthroughs that inspire excellence. Bridge the gap between technical and non-technical stakeholders, turning complex requirements into elegant, actionable solutions. Mentor junior developers, nurturing their growth and helping them build skills and careers under your leadership. What We’re Looking For: We’re seeking a passionate, experienced professional with: 10-13 years of hands-on experience designing and building data-intensive solutions using distributed computing, showcasing your mastery of scalable architectures. Proven success implementing and maintaining enterprise search solutions in large-scale environments, ensuring peak performance and reliability. A history of partnering with business stakeholders and users to shape research directions and craft robust, maintainable products. Extensive experience deploying data engineering solutions in public clouds like AWS, GCP, or Azure, leveraging cloud power to its fullest.
Advanced programming skills in Python, Java, .NET or Scala, backed by a portfolio of impressive projects. Strong knowledge of Gen AI tools (e.g., GitHub Copilot, ChatGPT, Claude, or Gemini) and their power to boost developer productivity. Expertise in containerization, scripting, cloud platforms, and CI/CD practices, ready to shine in a modern development ecosystem. 5+ years working with Python, Java, .NET, Kubernetes, and data/workflow orchestration tools, proving your technical versatility. Deep experience with SQL, NoSQL, Apache Spark, Airflow, or similar tools, operationalizing data-driven pipelines for large-scale batch and stream processing. A knack for rapid prototyping and iteration, delivering high-quality solutions under tight deadlines. Outstanding communication and documentation skills, adept at explaining complex ideas to technical and non-technical audiences alike. Take the Next Step: Ready to elevate your career and make a lasting impact in data and technology? Join us at S&P Global and help shape the future of financial information and analytics. Apply today! Return to Work Have you taken time out for caring responsibilities and are now looking to return to work? As part of our Return-to-Work initiative (link to career site page when available), we are encouraging enthusiastic and talented returners to apply and will actively support your return to the workplace. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all, from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead.
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316185 Posted On: 2025-06-06 Location: Hyderabad, Telangana, India
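The posting above asks for hands-on experience operationalizing Spark-based batch and stream pipelines. As a rough illustration of that kind of work, here is a minimal PySpark batch job; the paths, table names, and columns are invented for the sketch and are not S&P Global's.

```python
# A minimal PySpark batch pipeline sketch: ingest, transform, load.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_trades_rollup").getOrCreate()

# Ingest raw events from a (made-up) S3 landing zone
raw = spark.read.parquet("s3a://example-bucket/raw/trades/")

# Transform: filter bad records, derive a date, aggregate per instrument
daily = (
    raw.filter(F.col("price") > 0)
       .withColumn("trade_date", F.to_date("event_ts"))
       .groupBy("instrument_id", "trade_date")
       .agg(F.sum("quantity").alias("total_qty"),
            F.avg("price").alias("avg_price"))
)

# Load: write a partitioned, query-friendly curated dataset
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://example-bucket/curated/daily_trades/")
```

In production such a job would typically be scheduled by an orchestrator like Airflow and wired into the CI/CD pipeline the role describes.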
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hello Folks! We’re Hiring – Senior Data Engineer (GCP) || Hyderabad || Chennai || Bengaluru We are hiring for a product-based company for permanent roles! Join our innovative team and contribute to cutting-edge solutions. Job Title: Senior Data Engineer / Principal Data Engineer (GCP Data Engineer) What you’ll be doing… We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products/assets for the telecom wireless and wireline business, which include consumer analytics, telecom network performance and service assurance analytics. We are working on cutting-edge technologies like digital twin to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer you will be collaborating with business product owners, coaches, industry-renowned data scientists and system architects to develop strategic data solutions from sources which include batch, file and data streams. As a Data Engineer with ETL/ELT expertise for our growing data platform & analytics teams, you will understand and enable the required data sets from different sources, both structured and unstructured data, into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within the company. Understanding the business requirements and converting them to technical design. Working on data ingestion, preparation and transformation. Developing data streaming applications. Debugging production failures and identifying the solution. Working on ETL/ELT development. Understanding DevOps processes and contributing to DevOps pipelines. What we’re looking for... You’re curious about new technologies and the game-changing possibilities they create. You like to stay up-to-date with the latest trends and apply your technical expertise to solving business problems. You’ll need to have… Bachelor’s degree or four or more years of work experience. Four or more years of work experience. Experience with Data Warehouse concepts and the Data Management life cycle. Experience in any DBMS. Experience in Shell scripting, Spark, Scala. Experience in GCP/BigQuery, Composer, Airflow. Experience in real-time streaming. Experience: 4+ years (mandatory). Location: Hyderabad, Bengaluru, Chennai. Work Mode: Hybrid. Notice period: 60 days max. If interested, please share your CV at Ramanjaneya.m@technogenindia.com
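Since this role pairs BigQuery with Composer (managed Airflow), a sketch of how such a daily transformation might be scheduled is shown below. This is a generic illustration only; the project, dataset, and SQL are placeholders, not the client's actual pipeline.

```python
# Hypothetical Cloud Composer / Airflow DAG: a scheduled BigQuery rollup.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_usage_rollup",          # placeholder DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_usage",
        configuration={
            "query": {
                "query": """
                    SELECT subscriber_id,
                           DATE(event_ts) AS usage_date,
                           SUM(bytes_used) AS total_bytes
                    FROM `example-project.telecom.raw_usage`
                    GROUP BY subscriber_id, usage_date
                """,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "telecom",
                    "tableId": "daily_usage",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )
```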
Posted 1 week ago
30.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position Overview ABOUT APOLLO Apollo is a high-growth, global alternative asset manager. In our asset management business, we seek to provide our clients excess return at every point along the risk-reward spectrum from investment grade to private equity with a focus on three investing strategies: yield, hybrid, and equity. For more than three decades, our investing expertise across our fully integrated platform has served the financial return needs of our clients and provided businesses with innovative capital solutions for growth. Through Athene, our retirement services business, we specialize in helping clients achieve financial security by providing a suite of retirement savings products and acting as a solutions provider to institutions. Our patient, creative, and knowledgeable approach to investing aligns our clients, businesses we invest in, our employees, and the communities we impact, to expand opportunity and achieve positive outcomes. OUR PURPOSE AND CORE VALUES Our clients rely on our investment acumen to help secure their future. We must never lose our focus and determination to be the best investors and most trusted partners on their behalf. We strive to be: The leading provider of retirement income solutions to institutions, companies, and individuals. The leading provider of capital solutions to companies. Our breadth and scale enable us to deliver capital for even the largest projects – and our small firm mindset ensures we will be a thoughtful and dedicated partner to these organizations. We are committed to helping them build stronger businesses. A leading contributor to addressing some of the biggest issues facing the world today – such as energy transition, accelerating the adoption of new technologies, and social impact – where innovative approaches to investing can make a positive difference. We are building a unique firm of extraordinary colleagues who: Outperform expectations, Challenge convention, Champion opportunity, Lead responsibly, Drive collaboration. As One Apollo team, we believe that doing great work and having fun go hand in hand, and we are proud of what we can achieve together. Our Benefits Apollo relies on its people to keep it a leader in alternative investment management, and the firm’s benefit programs are crafted to offer meaningful coverage for both you and your family. Please reach out to your Human Capital Business Partner for more detailed information on specific benefits. Position Overview At Apollo, we’re a global team of alternative investment managers passionate about delivering uncommon value to our investors and shareholders. With over 30 years of proven expertise across Private Equity, Credit and Real Estate, regions and industries, we’re known for our integrated businesses, our strong investment performance, our value-oriented philosophy – and our people. The Client and Innovation Engineering team is responsible for designing and delivering digital products to our institutional and wealth management clients and sales team. We are a product-driven, developer-focused team; our goal is to simplify our engineering process and meet our business objectives. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment. Primary Responsibilities Apollo is seeking a hands-on, business-oriented Lead Data Engineer to lead the technology efforts focused on supporting data-driven distribution processes.
The ideal candidate will bring strong experience in data engineering within asset and/or wealth management, combined with excellent technical acumen and a passion for building scalable, secure, and high-performance data solutions. This role will partner closely with Distribution Data Enablement, Sales & Marketing, Operations, and Finance teams to execute key initiatives aligned with Apollo’s target operating model. You will play a critical role in building and evolving our data products and infrastructure. You will learn new technologies, constantly upgrading your skill set and the products you work on to stay at par with the best in the industry. You will innovate and solve technical challenges with an emphasis on a long-term vision. Design, build, and maintain scalable and efficient cloud-based data pipelines and integration workflows using Azure Data Factory (ADF), DBT, Snowflake, FiveTran, and related tools. Collaborate closely with business stakeholders to understand data needs and translate them into effective technical solutions, including developing relational and dimensional data models. Implement and optimize end-to-end ETL/ELT processes to support enterprise data needs. Design and implement pipeline controls, conduct data quality assessments and enforce data governance best practices to ensure accuracy and integrity. Monitor, troubleshoot, and resolve issues across data pipelines to ensure stability, reliability, and performance. Partner with cross-functional teams to support analytics, reporting, and operational data needs. Stay current with industry trends and emerging technologies to continuously improve our data architecture. Support master data management (MDM) initiatives and contribute to overall data strategy and architecture. Qualifications & Experience 8+ years of professional experience in data engineering or a related field, ideally within financial services or asset/wealth management. Proven expertise in Azure-based data engineering tools including ADF, DBT, Snowflake, and FiveTran. Programming skills in Python (or Scala/Java) for data transformation and automation. Solid understanding of modern data modeling (relational, dimensional, and star schema). Experience with MDM platforms and frameworks is highly desirable. Familiarity with additional ETL/ELT tools (e.g., Talend, Informatica, SSIS) is a plus. Comfortable working in a fast-paced, agile environment with rapidly changing priorities. Strong communication skills, with the ability to translate complex technical topics into business-friendly language. A degree in Computer Science, Engineering, or a related field is preferred. A strong analytical mindset with a passion for solving complex problems. A team player who is proactive, accountable, and detail-oriented. A leader who sets high standards and delivers high-quality outcomes. An innovator who keeps up with industry trends and continually seeks opportunities to improve.
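For a flavor of the ELT pattern this role centers on (transforming inside the warehouse rather than before loading), here is a small sketch using Snowflake's Python connector. Account, credentials, and table names are invented; a real setup would pull secrets from a vault and would more likely express this transform as a DBT model.

```python
# Illustrative ELT step against Snowflake: build a curated fact table
# in-warehouse from raw loaded data. All identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_SVC",
    password="***",              # in practice, fetch from a secrets manager
    warehouse="TRANSFORM_WH",
    database="DISTRIBUTION",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    cur.execute("""
        CREATE OR REPLACE TABLE FACT_SALES_DAILY AS
        SELECT rep_id,
               fund_id,
               CAST(trade_ts AS DATE) AS trade_date,
               SUM(amount)            AS gross_sales
        FROM RAW.SALES_EVENTS
        GROUP BY rep_id, fund_id, trade_date
    """)
finally:
    conn.close()
```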
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Pune, Gurugram
Work from Office
What will your job look like? Work with business teams and data analysts to understand business requirements. Design and development of cloud solutions using Databricks Spark or Snowflake to support efficient data analytic models: create derived and business-ready datasets, extracts and system integration with open-source tools. Production implementation and production support of Big Data solutions; investigate and troubleshoot production issues and provide fixes. Take ownership of tasks and proactively identify and communicate any potential issues/risks and impacts. Analyze, design and support various change requests/fast-track requirements, and understand and adapt to rapidly changing business requirements. All you need is... Min. Bachelor's degree in Science/IT/Computing or equivalent. 3-7 years total experience in development, mainly around Scala or Python and all related technologies. Proficiency in Spark 2.x applications in Scala or Python. Proficiency in writing Hive SQL batch jobs and scripting. 3+ years of experience developing applications on Databricks or certified Databricks Developer skills, or 3+ years of experience in Snowflake. Led design and development for Databricks or cloud projects. Strong experience in scripting (Shell or Python). Strong experience in SQL-based data analytical skills. Relevant experience in cloud projects is a plus. Experience with streaming on Kafka is a plus. Experience developing applications using Apache Iceberg is a plus. Hadoop/Spark/Java/Azure certifications are a plus. Excellent written and verbal communication - to communicate with Development and Project Management Leadership. Excellent collaboration and teamwork skills to work within AMDOCS, Client and other 3rd party vendors. Why you will love this job: The chance to serve as a specialist in software and technology. You will take an active role in technical mentoring within the team. We provide stellar benefits from health to dental to paid time off and parental leave!
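A "Hive SQL batch job" of the kind this posting asks about often amounts to a Spark job with Hive support that rebuilds a derived, business-ready table. A hedged sketch follows; the database, tables, and columns are made up for illustration.

```python
# Hedged sketch of a Hive SQL batch job run through Spark, producing a
# derived dataset. Schema and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("derived_dataset_build")
         .enableHiveSupport()          # lets Spark read/write Hive tables
         .getOrCreate())

spark.sql("""
    INSERT OVERWRITE TABLE mart.customer_usage_summary
    SELECT c.customer_id,
           c.segment,
           SUM(u.data_mb)      AS total_data_mb,
           COUNT(u.session_id) AS sessions
    FROM   raw.usage_events u
    JOIN   raw.customers c ON u.customer_id = c.customer_id
    GROUP BY c.customer_id, c.segment
""")
```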
Posted 1 week ago
8.0 - 12.0 years
10 - 20 Lacs
Pune
Work from Office
Role & responsibilities: Hands-on experience the candidate must have: (a) Spark, (b) Python (PySpark) or Scala, (c) SQL. 1. Candidate willing to take evaluations where he or she needs to write code in Python (PySpark) or Scala based on scenarios given by panelists. 2. Candidate currently or previously part of a project where he or she was involved in developing from scratch and maintaining scalable data pipelines using PySpark or Scala for data ingestion, transformation, and loading (ETL). 3. Candidate involved in designing and implementing data processing tasks, including data merging, enrichment, and aggregation. 4. Candidate involved in optimizing Spark jobs for performance and efficiency, including query tuning. 5. Knowledge of various Big Data file formats (e.g., Parquet, ORC, Avro) and storage systems (e.g., HDFS, cloud storage like AWS S3).
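As a compact example of the merge/enrich/aggregate and tuning topics the evaluation covers, here is a hedged PySpark sketch; paths and columns are invented, and the broadcast join stands in for one common optimization, not the only one.

```python
# Enrichment via broadcast join, aggregation, and partitioned Parquet
# output. All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("enrich_and_aggregate").getOrCreate()

events = spark.read.parquet("s3a://example/raw/events/")    # large fact
devices = spark.read.parquet("s3a://example/ref/devices/")  # small dimension

# Broadcasting the small dimension avoids a shuffle-heavy sort-merge join,
# a frequent Spark tuning move for large-fact/small-dim patterns.
enriched = events.join(broadcast(devices), "device_id")

result = (enriched
          .groupBy("device_type", F.to_date("event_ts").alias("day"))
          .agg(F.count("*").alias("events"),
               F.approx_count_distinct("user_id").alias("users")))

result.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example/curated/device_daily/")
```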
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Preferred Education Master's Degree Required Technical And Professional Expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR/AWS Glue/Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred Technical And Professional Experience Certification in AWS, and Databricks or Cloudera Spark certified developers.
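The streaming-pipeline experience this role calls for is commonly demonstrated with Spark Structured Streaming reading from Kafka. The sketch below is a generic illustration; the broker address, topic, JSON fields, and output paths are all assumptions.

```python
# Spark Structured Streaming from Kafka to Parquet; placeholders throughout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_ingest").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "orders")
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers raw bytes; cast the value and pull out minimal fields
orders = (stream
          .select(F.col("value").cast("string").alias("json"))
          .select(
              F.get_json_object("json", "$.order_id").alias("order_id"),
              F.get_json_object("json", "$.amount").cast("double")
               .alias("amount")))

query = (orders.writeStream
         .format("parquet")
         .option("path", "s3a://example/stream/orders/")
         .option("checkpointLocation", "s3a://example/chk/orders/")
         .start())
query.awaitTermination()
```

The checkpoint location is what gives the job restart recovery and exactly-once file output, which is why it is mandatory for production streaming sinks.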
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle clients' needs. Preferred Education Master's Degree Required Technical And Professional Expertise Big Data Developer, Hadoop, Hive, Spark, PySpark, strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of Cloud (AWS, Azure, etc.). Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration, or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java. Preferred Technical And Professional Experience Basic understanding or experience with predictive/prescriptive modeling skills. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Gurugram, Haryana
On-site
Role Description: Sr. Data Engineer – Big Data The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision. The person should be a self-motivated person with a passion for problem solving and continuous learning. Role and responsibilities • Strong technical, analytical, and problem-solving skills • Strong organizational skills, with the ability to work autonomously as well as in a team-based environment • Data pipeline framework development Technical skills requirements The candidate must demonstrate proficiency in: • CDH on-premise for data processing and extraction • Ability to own and deliver on large, multi-faceted projects • Fluency in complex SQL and experience with RDBMSs • Project experience in CDH, Spark, PySpark, Scala, Python, NiFi, Hive, NoSQL DBs • Experience designing and building big data pipelines • Experience working on large-scale, distributed systems • Strong hands-on experience of programming languages like PySpark, Scala with Spark, Python • Certification in Hadoop/Big Data – Hortonworks/Cloudera • Unix or Shell scripting • Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations • Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment Job Types: Full-time, Permanent Pay: ₹600,000.00 - ₹2,000,000.00 per year Benefits: Health insurance Life insurance Provident Fund Schedule: Day shift Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you serving notice period at your current organization? Education: Bachelor's (Required) Experience: Python: 3 years (Required) Work Location: In person
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Role Description: Sr. Data Engineer – Big Data The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision. The person should be a self-motivated person with a passion for problem solving and continuous learning. Designation: Sr. Lead Data Engineer: 4 - 6 Years Role and responsibilities Strong technical, analytical, and problem-solving skills Strong organizational skills, with the ability to work autonomously as well as in a team-based environment Data pipeline framework development Technical skills requirements The candidate must demonstrate proficiency in: CDH on-premise for data processing and extraction Ability to own and deliver on large, multi-faceted projects Fluency in complex SQL and experience with RDBMSs Project experience in CDH, Spark, PySpark, Scala, Python, NiFi, Hive, NoSQL DBs Experience designing and building big data pipelines Experience working on large-scale, distributed systems Strong hands-on experience of programming languages like PySpark, Scala with Spark, Python Certification in Hadoop/Big Data – Hortonworks/Cloudera Unix or Shell scripting Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment Qualifications B.Tech./M.Tech./MS or BCA/MCA degree from a reputed university We are an Equal Opportunity Employer Job Type: Full-time Pay: Up to ₹2,000,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Fixed shift Monday to Friday Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you serving notice period at your current organisation? Education: Bachelor's (Required) Experience: Python: 3 years (Required) Work Location: In person
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description Team - TMS & WMS (Bangalore) Key Accountabilities Design, implement, test, deploy and maintain innovative software solutions to transform service performance, durability, cost, and security. Use software engineering best practices to ensure a high standard of quality for all of the team deliverables. Write high quality distributed system software. Work in an agile, startup-like development environment, where you are always working on the most important stuff. In this role you will lead a critical and highly-visible function within DP World International Expansion Business. You will be given the opportunity to autonomously deliver the technical direction of the service, and the feature roadmap. You will work with extraordinary talent and have the opportunity to hire and shape the team to best execute on the product. Other If the role has direct reports, you will be responsible for the management and leadership of an engaged team, promoting collaboration and ensuring that each member is developed and evaluated against goals and objectives which are aligned, specific, measurable, attainable yet challenging, realistic and time-bound. Act as an ambassador for DP World at all times when working; promoting and demonstrating positive behaviors in harmony with DP World’s Founder’s Principles, values and culture; ensuring the highest level of safety is applied in all activities; understanding and following DP World’s Code of Conduct and Ethics policies. Perform other related duties as assigned. Basic Qualifications, Experience and Skills Bachelor’s Degree in Computer Science or related field, or equivalent experience to a Bachelor's degree based on 3 years of work experience for every 1 year of education 8+ years professional experience in software development; you will be able to discuss in depth both the design and your significant contributions to one or more projects Solid understanding of computer science fundamentals: data structures, algorithms, distributed system design, databases, and design patterns. Strong coding skills with a modern language (NodeJS, GoLang, Scala, Java, etc.) Experience working in an Agile/Scrum environment and DevOps automation REST, JavaScript/TypeScript, Node, GraphQL, PostgreSQL, MongoDB, Redis, Angular, ReactJS, Vue, AWS, machine learning, geolocation and mapping APIs Preferred Qualifications Experience with distributed system performance analysis and optimization Familiar with AWS services (RDS, DynamoDB, Lambda, Kinesis, SNS, CloudWatch, …) Experience in NLP, Deep Learning & Machine Learning. Experience in training machine learning models or developing machine learning infrastructure. Strong communication skills; you will be required to proactively engage colleagues both inside and outside of your team. Ability to effectively articulate technical challenges and solutions Deal well with ambiguous/undefined problems; ability to think abstractly
Posted 1 week ago
5.0 - 10.0 years
27 - 37 Lacs
Pune
Work from Office
Excellent in SDLC processes. Ability to participate in deep technical discussions with the customers and elicit the requirements. Ability to work on tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.
Posted 1 week ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7 + years related experience; or M.S. 
with 5+ years of experience; or Ph.D with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years experience with ML Platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.) Production experience with infrastructure-as-code tools such as Terraform, FluxCD Expert level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools Expert level experience with CI/CD frameworks such as GitHub Actions Expert level experience with containerization frameworks Strong analytical and problem-solving skills, capable of working in a dynamic environment Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. Experience with the following is desirable: Go, Iceberg, Pinot or other time-series/OLAP-style databases, Jenkins, Parquet, Protocol Buffers/gRPC. Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
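The role above is explicitly about standardizing model development and tracking on tools like MLflow. As a hedged illustration of that pattern, here is a minimal experiment-tracking run; the experiment name, model, and metrics are stand-ins, not CrowdStrike internals.

```python
# Minimal MLflow experiment tracking sketch with a toy scikit-learn model.
# Experiment, parameter, and metric names are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("threat-model-experiments")  # hypothetical name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_tr, y_tr)

    # Log everything needed to reproduce and compare the run later
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # artifact for later serving
```

Wrapping this boilerplate into a shared library is one way a platform team makes experimentation repeatable and self-service, which is the thrust of the role.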
Posted 1 week ago
5.0 - 8.0 years
10 - 20 Lacs
Pune, Chennai, Mumbai (All Areas)
Hybrid
Hello Connections! Exciting Opportunity Alert!! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: Minimum 6 to maximum 9 years of experience. Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune. Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / Computer Science background - any specialization. How to Apply? Send your CV to: sipriyar@sightspectrum.in. Contact Number - 6383476138. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark
Posted 1 week ago
5.0 - 10.0 years
20 - 30 Lacs
Pune
Hybrid
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world! Description: We are seeking a talented Sr. Big Data Engineer to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers measure, communicate and eliminate cyber risks. Working with a team of engineers and architects, you will be responsible for prototyping, designing, developing and supporting a highly scalable, distributed SaaS-based Security Risk Prioritization product. This is a fantastic opportunity to be an integral part of a team building Qualys' next-generation platform using Big Data and microservices-based technology to process billions of transactions per day, leverage open-source technologies, and work on challenging and business-impacting initiatives. Responsibilities: Be the thought leader in data platform and pipeline along with Risk Evaluation. Provide technical leadership to the engineering organization on data platform design, roll out and evolution. Liaison to product teams, professional services and sales engineers on solution and trade-off reviews and represent engineering in such conversations. Drive technology explorations and roadmaps. Serve as a technical lead on our most demanding, cross-functional projects. Ensure the quality of architecture and design of systems. Functionally decompose complex problems into simple, straight-forward solutions. Fully and completely understand system interdependencies and limitations. Possess expert knowledge in performance, scalability, enterprise system architecture, and engineering best practices. Leverage knowledge of internal and industry prior art in design decisions. Effectively research and benchmark cloud technology against other competing systems in the industry. Able to document the details so it will be easy for developers to understand the requirements. Assist developers with proper requirements and directions. Assist in the career development of others, actively mentoring individuals and the community on advanced technical issues and helping managers guide the career growth of their team members. Exert technical influence over multiple teams, increasing their productivity and effectiveness by sharing your deep knowledge and experience. Able to share knowledge and train others. Qualifications: Bachelor's degree in Computer Science or equivalent. 6+ years of total experience. 2+ years of relevant experience in designing and architecting Big Data solutions using Spark. 3+ years of experience in working with engineering resources for innovation. 4+ years of experience in understanding Big Data event-flow pipelines. 3+ years of experience in performance testing for large infrastructure. 3+ years of in-depth experience with various search solutions (Solr/Elasticsearch). 3+ years of experience in Kafka. In-depth experience in data lakes and related ecosystems. In-depth experience with messaging queues. In-depth experience in giving requirements to build a scalable architecture for Big Data and microservices environments. In-depth experience in understanding caching components or services. Knowledge of Presto technology. Knowledge of Airflow. Hands-on experience in scripting and automation. In-depth understanding of RDBMS/NoSQL, Oracle, Cassandra, Kafka, Redis, Hadoop, Lambda architecture, Kappa, Kappa++ architectures with Flink data streaming and rule engines. Experience working with ML model engineering and related deployment.
Design and implement secure big data clusters to meet compliance and regulatory requirements. Experience in leading the delivery of large-scale systems focused on managing the infrastructure layer of the technology stack. Strong experience in performance benchmarking and testing for Big Data technologies. Strong troubleshooting skills. Experience leading development life cycle processes and best practices. Experience in Big Data services administration would be an added value. Experience with Agile Management (SCRUM, RUP, XP), OO Modeling, working on internet, UNIX, Middleware, and database related projects. Experience mentoring/training the engineering community on complex technical issues. Project management experience.
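Given the posting's emphasis on Kafka and event-flow pipelines, a hedged sketch of a producer/consumer pair is shown below, using the kafka-python client; the broker address, topic, and event payload are placeholders.

```python
# Minimal Kafka produce/consume round trip with kafka-python.
# Broker, topic, and payload are invented for illustration.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("risk-events", {"asset_id": "a-123", "severity": 4})
producer.flush()  # block until the event is actually delivered

consumer = KafkaConsumer(
    "risk-events",
    bootstrap_servers="broker1:9092",
    group_id="risk-scorer",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for msg in consumer:
    print(msg.value)  # downstream: score, enrich, index into search
    break
```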
Posted 1 week ago
5.0 - 10.0 years
27 - 37 Lacs
Hyderabad
Work from Office
Excellent in SDLC processes. Ability to participate in deep technical discussions with the customers and elicit the requirements. Ability to work on tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.
Posted 1 week ago
5.0 - 10.0 years
27 - 37 Lacs
Bangalore Rural
Work from Office
Excellent in SDLC processes. Ability to participate in deep technical discussions with the customers and elicit the requirements. Ability to work on tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Responsibilities: Implement data pipelines that meet design and are efficient, scalable, and maintainable. Implement best practices including proper use of source control, participation in code reviews, data validation and testing. Timely deliveries while working on projects. Act as advisor/mentor and help junior data engineers in their deliverables. Must Have Skills: Should have experience of at least 4+ years with Data Engineering. Strong experience in design, implementation and fine-tuning of big data processing pipelines in production environments. Experience with big data tools like Hadoop, Spark, Kafka, Hive, Databricks. Experience in programming with at least one of Python, Java, Scala, Shell Script. Experience with relational SQL and NoSQL databases like PostgreSQL, MySQL, Cassandra etc. Experience with any data visualization tool (Plotly, Tableau, Power BI, Google Data Studio, QuickSight etc.) Good To Have Skills: Should have basic knowledge of CI/CD pipelines. Experience in working on at least one cloud (AWS or Azure or GCP). For AWS: Experience with AWS Cloud services like EC2, S3, EMR, RDS, Athena, Glue, Lambda. For Azure: Experience with Azure Cloud services like Azure Blob/Data Lake Gen2, Delta Lake, Databricks, Azure SQL, Azure DevOps, Azure Data Factory, Power BI. For GCP: Experience with GCP Cloud services BigQuery, Cloud Storage buckets, Dataproc, Dataflow, Pub/Sub, Cloud Functions, Data Studio. Sound familiarity with versioning tools (Git, SVN etc.) Experience mentoring students is desirable. Knowledge of latest developments in Machine Learning, Deep Learning, Optimization in the Automotive domain. Open-minded approach to exploring multiple algorithms to design optimal solutions. History of contribution to articles/blogs/whitepapers etc. in Analytics. History of contribution to Open Source. Required Skills: Data Engineering, Hadoop, Kafka, CI/CD, Cloud
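The "data validation and testing" best practice this posting lists can be as simple as assertion gates that run before a dataset is published. A hedged sketch, with invented paths and keys:

```python
# Simple pre-publish data quality checks in PySpark; placeholders only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline_checks").getOrCreate()
df = spark.read.parquet("s3a://example/curated/daily_trades/")

# Fail the job early rather than publish bad data downstream.
assert df.count() > 0, "empty output dataset"

null_keys = df.filter(F.col("instrument_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows with null business key"

dupes = (df.groupBy("instrument_id", "trade_date")
           .count()
           .filter("count > 1"))
assert dupes.count() == 0, "duplicate business keys detected"
```

Dedicated frameworks (e.g., Great Expectations or dbt tests) generalize the same idea with declarative rules and reporting.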
Posted 1 week ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description R3 Senior Manager – Data and Analytics Architect The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Role Overview We are seeking a highly motivated and hands-on Data & Analytics Architect to join our Strategy & Architecture team within CDNA. This mid-level role will play a critical part in designing scalable, reusable, and secure data and analytics solutions across the enterprise. You will work under the guidance of a senior architect and be directly involved in the implementation of architectural patterns, reference solutions, and technical best practices. This is a highly technical role, ideal for someone who enjoys problem-solving, building frameworks, and working in a fast-paced, collaborative environment. What Will You Do In This Role Partner with senior architects to define and implement modern data architecture patterns and reusable frameworks. Design and develop reference implementations for ingestion, transformation, governance, and analytics using tools such as Databricks (must-have), Informatica, AWS Glue, S3, Redshift, and DBT. Contribute to the development of a consistent and governed semantic layer, ensuring alignment in business logic, definitions, and metrics across the enterprise. Work closely with product line teams to ensure architectural compliance, scalability, and interoperability. Build and optimize batch and real-time data pipelines, applying best practices in data modeling, transformation, and metadata management. Contribute to architecture governance processes, participate in design reviews, and document architectural decisions. Support mentoring of junior engineers and help foster a strong technical culture within the India-based team. What Should You Have Bachelor’s degree in Information Technology, Computer Science or any technology stream. 5–8 years of experience in data architecture, data engineering, or analytics solution delivery. Proven hands-on experience with Databricks (must), Informatica, the AWS data ecosystem (S3, Glue, Redshift, etc.), and DBT.
Solid understanding of semantic layer design, including canonical data models and standardized metric logic for enterprise reporting and analytics. Proficient in SQL, Python, or Scala. Strong grasp of data modeling techniques (relational, dimensional, NoSQL), ETL/ELT design, and streaming data frameworks. Knowledge of data governance, data security, lineage, and compliance best practices. Strong collaboration and communication skills across global and distributed teams. Experience with Dataiku or similar data science/analytics platforms is a plus. Exposure to AI/ML and GenAI use cases is advantageous. Background in pharmaceutical, healthcare, or life sciences industries is preferred. Familiarity with API design, data services, and event-driven architecture is beneficial. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who We Are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular. Flexible Work Arrangements: Hybrid. Required Skills: Business Enterprise Architecture (BEA), Business Process Modeling, Data Modeling, Emerging Technologies, Requirements Management, Solution Architecture, Stakeholder Relationship Management, Strategic Planning, System Designs. Job Posting End Date: 06/15/2025. A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R341138
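Since Databricks is the must-have here and the role stresses a governed semantic layer, the sketch below shows the underlying idea: define a canonical metric once and publish it as a table every downstream report reuses. Table names and the metric logic are assumptions for illustration.

```python
# Databricks-style PySpark + Delta sketch of a governed semantic-layer
# metric. All table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("semantic_metrics").getOrCreate()

sales = spark.read.table("curated.sales_fact")  # hypothetical Delta table

# Canonical metric: "net sales" defined once, not re-derived per dashboard
net_sales = (sales
             .withColumn("net_amount",
                         F.col("gross_amount") - F.col("returns"))
             .groupBy("product_id", "fiscal_month")
             .agg(F.sum("net_amount").alias("net_sales")))

net_sales.write.format("delta").mode("overwrite").saveAsTable(
    "semantic.net_sales_monthly")
```

In practice the same definition would often live in a DBT model with tests, so the logic, lineage, and documentation stay in one governed place.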
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
Job description Must Have: Minimum 2 years of experience in developing Java applications. Experience with Spring Boot/Spring. Experience with microservices development. Professional, precise communication skills. Experience in REST API development. Experience with MySQL/PostgreSQL. Experience in troubleshooting and resolving issues in existing applications. Nice to Have: Knowledge of web application development in Scala. Knowledge of the Play Framework. Hands-on experience with AWS/Azure/GCP. Knowledge of HTML/CSS/JS & TypeScript. Category: Software Development. Education UG: B.Tech/B.E. in Any Specialization. Key Skills: Java, Spring Boot, MVC, Spring, REST API Development, MySQL, Scala Web Application Development, AWS, Play Framework
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description: Job Description Role Purpose The purpose of this role is to design, test and maintain software programs for operating systems or applications which needs to be deployed at a client end and ensure its meet 100% quality assurance parameters ͏ Do 1. Instrumental in understanding the requirements and design of the product/ software Develop software solutions by studying information needs, studying systems flow, data usage and work processes Investigating problem areas followed by the software development life cycle Facilitate root cause analysis of the system issues and problem statement Identify ideas to improve system performance and impact availability Analyze client requirements and convert requirements to feasible design Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements Conferring with project managers to obtain information on software capabilities ͏ 2. Perform coding and ensure optimal software/ module development Determine operational feasibility by evaluating analysis, problem definition, requirements, software development and proposed software Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing these cases Modifying software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. Analyzing information to recommend and plan the installation of new systems or modifications of an existing system Ensuring that code is error free or has no bugs and test failure Preparing reports on programming project specifications, activities and status Ensure all the codes are raised as per the norm defined for project / program / account with clear description and replication patterns Compile timely, comprehensive and accurate documentation and reports as requested Coordinating with the team on daily project status and progress and documenting it Providing feedback on usability and serviceability, trace the result to quality risk and report it to concerned stakeholders ͏ 3. Status Reporting and Customer Focus on an ongoing basis with respect to project and its execution Capturing all the requirements and clarifications from the client for better quality work Taking feedback on the regular basis to ensure smooth and on time delivery Participating in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members. Consulting with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code Documenting very necessary details and reports in a formal way for proper understanding of software from client proposal to implementation Ensure good quality of interaction with customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette etc Timely Response to customer requests and no instances of complaints either internally or externally ͏ Deliver No. 
1. Continuous Integration, Deployment & Monitoring of Software
   Measure: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT
   Measure: on-time delivery, software management, query troubleshooting, customer experience, completion of assigned certifications for skill upgradation
3. MIS & Reporting
   Measure: 100% on-time MIS & report generation

Mandatory Skills: Scala programming.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
6.0 - 10.0 years
8 - 12 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Contractual. Hiring manager profile: linkedin.com/in/yashsharma1608. Payroll of: https://www.nyxtech.in/

Azure Data Engineer with Fabric
The Role: Lead Data Engineer
Client: Brillio
Experience: 6 to 8 years
Location: Bangalore, Hyderabad, Pune, Chennai, Gurgaon (Hyderabad is preferred)
Notice: 15 days / 30 days
Budget: 15 LPA
Azure Fabric experience is mandatory.
Skills: Azure OneLake, data pipelines, Apache Spark, ETL, Azure Data Factory, Azure Fabric, SQL, Python/Scala

Key Responsibilities:
- Data Pipeline Development: Lead the design, development, and deployment of data pipelines using Azure OneLake, Azure Data Factory, and Apache Spark, ensuring efficient, scalable, and secure data movement across systems (a sketch of the core read-transform-write pattern follows this listing).
- ETL Architecture: Architect and implement ETL (Extract, Transform, Load) workflows, optimizing the process for data ingestion, transformation, and storage in the cloud.
- Data Integration: Build and manage data integration solutions that connect multiple data sources (structured and unstructured) into a cohesive data ecosystem. Use SQL, Python, Scala, and R to manipulate and process large datasets.
- Azure OneLake Expertise: Leverage Azure OneLake and Azure Synapse Analytics to design and implement scalable data storage and analytics solutions that support big data processing and analysis.
- Collaboration with Teams: Work closely with Data Scientists, Data Analysts, and BI Engineers to ensure that the data infrastructure supports analytical needs and is optimized for performance and accuracy.
- Performance Optimization: Monitor, troubleshoot, and optimize data pipeline performance to ensure high availability, fast processing, and minimal downtime.
- Data Governance & Security: Implement best practices for data governance, security, and compliance within the Azure ecosystem, ensuring data privacy and protection.
- Leadership & Mentorship: Lead and mentor a team of data engineers, promoting a collaborative, high-performance team culture. Oversee code reviews, design decisions, and the implementation of new technologies.
- Automation & Monitoring: Automate data engineering workflows, job scheduling, and monitoring to ensure smooth operations, using tools such as Azure DevOps and Airflow for automation and orchestration.
- Documentation & Best Practices: Document data pipeline architecture, data models, and ETL processes, and contribute to the establishment of engineering best practices, standards, and guidelines.
- Innovation: Stay current with industry trends and emerging technologies in data engineering, cloud computing, and big data analytics, driving innovation within the team.
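To make the pipeline work concrete, here is a minimal read-transform-write sketch in Scala on Spark 3.x. The storage paths and column names are hypothetical placeholders, not details from the listing.

```scala
// Minimal Spark ETL sketch (Spark 3.x assumed; paths and columns are hypothetical).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sales-etl")
      .getOrCreate()

    // Extract: read raw CSV from the lake (placeholder path).
    val raw = spark.read.option("header", "true").csv("/lake/raw/sales.csv")

    // Transform: type the amount column and aggregate per region.
    val summary = raw
      .withColumn("amount", col("amount").cast("double"))
      .groupBy("region")
      .agg(sum("amount").as("total_amount"))

    // Load: write the curated result back as Parquet.
    summary.write.mode("overwrite").parquet("/lake/curated/sales_summary")

    spark.stop()
  }
}
```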
Posted 1 week ago
4.0 - 6.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
Role: Data Engineer
Experience: 4-6 yrs
Location: Chennai, Bangalore, Pune, Hyderabad, Kochi, Bhubaneshwar
Required Skillset:
=> Should have experience in PySpark
=> Should have experience in AWS Glue
Interested candidates can send their resume to jegadheeswari.m@spstaffing.in or reach me @ 9566720836
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Let’s be unstoppable together! At Circana, we are fueled by our passion for continuous learning and growth; we seek and share feedback freely, and we celebrate victories both big and small in an environment that is flexible and accommodating to our work and personal lives. We have a global commitment to diversity, equity, and inclusion, as we believe in the undeniable strength that diversity brings to our business, employees, clients, and communities. With us, you can always bring your full self to work. Join our inclusive, committed team to be a challenger, own outcomes, and stay curious together. Circana is proud to be Certified™ by Great Place To Work®. This prestigious award is based entirely on what current employees say about their experience working at Circana. Learn more at www.circana.com

Role & Responsibilities
- Evaluate the domain, financial, and technical feasibility of solution ideas with the help of all key stakeholders
- Design, develop, and maintain highly scalable data processing applications
- Write efficient, reusable, and well-documented code
- Deliver big data projects using Spark, Scala, Python, and SQL
- Maintain and tune existing Spark applications, and find opportunities to optimize them
- Work closely with QA, Operations, and various teams to deliver error-free software on time
- Actively lead and participate in daily agile/scrum meetings
- Take responsibility for Apache Spark development and implementation
- Translate complex technical and functional requirements into detailed designs
- Investigate alternatives for data storage and processing to ensure implementation of the most streamlined solutions
- Serve as a mentor for junior staff members by conducting technical training sessions and reviewing project outputs

Qualifications
- Engineering graduates with computer science backgrounds preferred, with 8+ years of software development experience with Hadoop framework components (HDFS, Spark, Scala, PySpark)
- Excellent verbal, written, and presentation skills
- Ability to present and defend a solution with technical facts and business proficiency
- Understanding of data-warehousing and data-modeling techniques
- Strong data engineering skills
- Knowledge of Core Java, Linux, SQL, and any scripting language
- At least 6+ years of experience using Python/Scala, Spark, and SQL
- Knowledge of shell scripting is a plus
- Knowledge of Core and Advanced Java is a plus
- Experience in developing and tuning Spark applications
- Excellent understanding of Spark architecture, DataFrames, and Spark tuning (a short illustration follows this listing)
- Strong knowledge of database concepts, systems architecture, and data structures is a must
- Process-oriented, with strong analytical and problem-solving skills
- Experience in writing Python standalone applications dealing with the PySpark API
- Knowledge of the delta.io (Delta Lake) package is a plus

Note: "An offer of employment may be conditional upon successful completion of a background check in accordance with local legislation and our candidate privacy notice. Your current employer will not be contacted without your permission."
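As a short, hedged illustration of the Spark tuning themes above: a broadcast join plus caching, with explain() to verify the chosen plan. Spark 3.x is assumed, and the paths, tables, and column names are invented for the example.

```scala
// Common Spark DataFrame tuning steps (Spark 3.x assumed; names are hypothetical).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TuningDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tuning-demo").getOrCreate()

    val events = spark.read.parquet("/data/events") // large fact table
    val users  = spark.read.parquet("/data/users")  // small dimension table

    // Broadcast the small side of the join to avoid a full shuffle.
    val joined = events.join(broadcast(users), Seq("user_id"))

    // Cache a DataFrame that several downstream actions will reuse.
    joined.cache()

    // Inspect the physical plan to confirm a broadcast join was selected.
    joined.explain()

    println(joined.count())
    spark.stop()
  }
}
```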
Posted 1 week ago
7.0 years
0 Lacs
Chandigarh, India
Remote
Experience: 7+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Upland Software)

What do you need for this opportunity?
Must-have skills: DevOps, AWS PowerShell/AWS CLI, Java/Scala/Golang, Terraform

Upland Software is looking for:
Opportunity Summary: We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer in the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems, an excellent understanding of key SaaS technologies, and a high degree of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's product and influence decisions concerning solutions and techniques within their discipline.

What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.

What are we looking for?
Experience: 7-9 years of total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.

Technical Skills: To be a part of this journey, you should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.

Primary Skills:
- Public cloud providers (AWS): solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web servers, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, and Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route 53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, and CloudWatch monitoring/alerting.
- Automating existing manual workloads, such as reporting and patching/updating servers, by writing scripts, Lambda functions, etc.
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools such as Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.
Secondary Skills: It would be advantageous if the candidate also has the following:
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.

Soft Skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.

Qualification:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Experience with modernizing legacy applications and improving deployment processes.
- Excellent problem-solving skills and the ability to work under remote supervision.
- Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.

About Upland: Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software, headquartered in Bangalore. We are a remote-first company; interviews and onboarding are conducted virtually. Upland Software is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other legally protected status.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
A Typical Day
- Design, develop, and maintain ETL processes and data pipelines with Scala/PySpark, ensuring seamless integration of healthcare data formats such as HL7 and FHIR.
- Collaborate with data scientists, analysts, and healthcare stakeholders to understand data requirements and model Electronic Health Records (EHR) and Electronic Case Reports (ECR) for high-quality, compliant data solutions.
- Optimize and tune data pipelines for performance and scalability, ensuring rapid access to critical healthcare information.
- Ensure data quality and integrity through robust testing and validation processes, adhering to healthcare regulations.
- Implement data governance and security best practices to protect sensitive patient information.
- Monitor and troubleshoot data pipelines to ensure continuous data flow, promptly addressing any issues to maintain operational efficiency.
- Stay up to date with the latest trends and technologies in data engineering, particularly in the healthcare sector.
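As a hedged illustration of what such a pipeline's ingest step can look like: FHIR bulk exports are commonly newline-delimited JSON, which Spark reads natively. Spark 3.x is assumed, and the path below is a made-up placeholder.

```scala
// Illustrative sketch: loading FHIR Patient resources (newline-delimited JSON)
// into a Spark DataFrame (Spark 3.x assumed; path is hypothetical).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object FhirIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("fhir-ingest").getOrCreate()

    // Each line is one FHIR resource serialized as JSON (bulk-export NDJSON style).
    val patients = spark.read.json("/data/fhir/Patient.ndjson")

    // Basic validation: keep only resources of the expected type that carry an id.
    val valid = patients
      .filter(col("resourceType") === "Patient")
      .filter(col("id").isNotNull)

    // Standard FHIR Patient fields; downstream steps would map these to EHR models.
    valid.select("id", "gender", "birthDate").show(5)
    spark.stop()
  }
}
```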
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
Major tech hubs such as Bengaluru, Pune, Hyderabad, and Chennai are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
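To ground that last list item, here is a small, self-contained Scala snippet touching the functional programming concepts interviews tend to probe: immutable data, higher-order functions, and pattern matching. The domain objects are invented purely for illustration.

```scala
// Functional programming basics in Scala (Order is a hypothetical example type).
object FpBasics extends App {
  // Immutable data modeled with a case class.
  final case class Order(id: Int, amount: Double)

  val orders = List(Order(1, 120.0), Order(2, 40.0), Order(3, 310.0))

  // Higher-order functions: filter/map/sum instead of mutation and loops.
  val largeTotal = orders.filter(_.amount > 100).map(_.amount).sum

  // Pattern matching on Option instead of null checks.
  val label = orders.find(_.amount > 300) match {
    case Some(Order(id, _)) => s"big spender: order $id"
    case None               => "no large orders"
  }

  println(s"total of large orders: $largeTotal")
  println(label)
}
```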
Interview questions for Scala roles typically probe core language features, functional programming concepts, and ecosystem tools such as Spark, Akka, and the Play Framework, so be ready to discuss each of these areas.
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!