8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are seeking a highly skilled Lead Data Engineer / Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.
Experience: 8 - 12+ years
Work Location: Hyderabad (Hybrid)
Mandatory skills: Python, SQL, Snowflake
Contract to Hire - 6+ months
Responsibilities:
- Design and develop scalable and resilient data architectures that support business needs, analytics, and AI/ML workloads.
- Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage.
- Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks.
- Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases.
- Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions.
- Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering.
Required Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 8 - 12+ years of experience in data engineering.
- Cloud Platforms: Strong expertise in AWS data services.
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and related frameworks.
- Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake.
- Programming: Proficiency in Python, Scala, or Java for data processing and automation.
- ETL Tools: Experience with tools like Apache Airflow, Talend, DBT, or Informatica.
- Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.
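Purely as an illustration (not part of the posting), here is a minimal Python sketch of the kind of ELT step this role describes, using the Snowflake Python connector; the account settings, staging table, and target table names are hypothetical placeholders.

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Connect with placeholder credentials read from the environment.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# A simple ELT step: upsert cleaned rows from a staging table into a target table.
MERGE_SQL = """
MERGE INTO ANALYTICS.PUBLIC.ORDERS AS tgt
USING (
    SELECT order_id, customer_id, TRY_TO_NUMBER(amount) AS amount, order_ts
    FROM STAGING.PUBLIC.RAW_ORDERS
    WHERE order_id IS NOT NULL
) AS src
ON tgt.order_id = src.order_id
WHEN MATCHED THEN UPDATE SET amount = src.amount, order_ts = src.order_ts
WHEN NOT MATCHED THEN INSERT (order_id, customer_id, amount, order_ts)
                      VALUES (src.order_id, src.customer_id, src.amount, src.order_ts)
"""

cur = conn.cursor()
try:
    cur.execute(MERGE_SQL)
    print(f"Rows affected: {cur.rowcount}")
finally:
    cur.close()
    conn.close()
```

In practice a step like this would typically be scheduled from Airflow or wrapped in a dbt model rather than run as a standalone script.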
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Job Title: Data Engineer with API Development (Remote)
Experience: 7+ years
Location: Remote/Hybrid
Shift timing: 11am to 8:30pm
Contract: 6 months, extendable
Job Description:
- 7+ years of overall IT experience
- 3+ years of experience in Azure architecture/engineering (one or more of: Azure Functions, App Services, API development)
- 3+ years of development experience (e.g. Python, C#, Go, or other)
- 1+ years of CI/CD experience (GitHub preferred)
- 2+ years of API development experience (creating and consuming APIs)
Nice to have: Service Bus, Terraform, ADF, Databricks, Spark, Scala
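For illustration only, a minimal sketch of the kind of API work this role covers, using the Azure Functions Python v2 programming model; the route and response payload are hypothetical, not taken from the posting.

```python
import json
import azure.functions as func  # Azure Functions Python v2 programming model

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orders/{order_id}", methods=["GET"])
def get_order(req: func.HttpRequest) -> func.HttpResponse:
    """Return a single (hypothetical) order record as JSON."""
    order_id = req.route_params.get("order_id")
    if not order_id:
        return func.HttpResponse("order_id is required", status_code=400)

    # In a real service this would query a database or call a downstream API.
    order = {"order_id": order_id, "status": "SHIPPED", "amount": 129.50}
    return func.HttpResponse(
        json.dumps(order),
        status_code=200,
        mimetype="application/json",
    )
```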
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Data Architect – Telecom Domain
About the Role: We are seeking an experienced Telecom Data Architect to join our team. In this role, you will be responsible for designing comprehensive data architecture and technical solutions specifically for telecommunications industry challenges, leveraging TM Forum frameworks and modern data platforms. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.
Responsibilities:
- Design and articulate enterprise-scale telecom data architectures incorporating TM Forum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map)
- Develop comprehensive data models aligned with TM Forum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management
- Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance
- Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics
- Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives
- Design and deliver proofs of concept (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges
- Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments
- Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs
- Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.)
- Stay current with the latest advancements in data technologies including cloud services, data processing frameworks, and AI/ML capabilities
- Contribute to the development of best practices, reference architectures, and reusable solution components for accelerating proposal development
Qualifications:
- Bachelor's or Master's degree in Computer Science, Telecommunications Engineering, Data Science, or a related technical field
- 10+ years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry
- Deep knowledge of TM Forum frameworks including SID (Shared Information/Data Model), eTOM, TAM, and their practical implementation in telecom data architectures
- Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives
- Hands-on experience building data models and platforms aligned with TM Forum standards and telecommunications business processes
- Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains
- Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts
- Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming
- Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements
- Knowledge of system monitoring and observability tools for telecommunications data infrastructure
- Experience implementing automated testing frameworks for telecom data platforms and pipelines
- Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications
- Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases
- Proficiency in programming languages commonly used in data processing (Python, Scala, SQL) with telecom domain applications
- Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws)
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to telecom stakeholders
- Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges
- Good to have: TM Forum certifications or other telecommunications industry certifications
- Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus
- Willingness to travel as required
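As an illustrative sketch only (the storage paths, table and column names are hypothetical, not taken from the posting), a subscriber-usage roll-up of the kind implied by "customer journey analytics" might look like this in PySpark:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdr-usage-rollup").getOrCreate()

# Hypothetical call detail records landed in an Azure data lake as Parquet.
cdrs = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/cdr/")

# Daily usage per subscriber: call minutes, data volume, and dropped-call rate.
daily_usage = (
    cdrs
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("subscriber_id", "event_date")
    .agg(
        F.sum("call_duration_sec").alias("total_call_sec"),
        F.sum("data_mb").alias("total_data_mb"),
        F.avg(F.col("dropped").cast("double")).alias("dropped_call_rate"),
    )
)

daily_usage.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/cdr_daily_usage/"
)
```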
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a highly skilled and motivated Senior Data Engineer with expertise in Databricks and Azure to join our team. As a Senior Data Engineer, you will be responsible for designing, developing and maintaining our data lakehouse and pipelines. You will work closely with the Data & Analytics teams to ensure efficient data flow and enable data-driven decision-making. The ideal candidate will have a strong background in data engineering, experience with Databricks, Azure Data Factory and other Azure services, and a passion for working with large-scale data sets.
Role Description
Design, develop and maintain the solutions required for data processing, storage and retrieval. Create scalable, reliable and efficient data pipelines that enable data developers and engineers, data analysts and business stakeholders to access and analyze large volumes of data. Collaborate closely with other team members and the Product Owner.
Key Responsibilities
- Collaborate with the Product Owner, Business Analyst and other team members to understand requirements and design scalable data pipelines and architectures.
- Build and maintain data ingestion, transformation and storage processes using Databricks and Azure services.
- Develop efficient ETL/ELT workflows to extract, transform and load data from various sources into data lakes.
- Design solutions and drive implementation for enhancing, improving and securing the Data Lakehouse.
- Optimize and fine-tune data pipelines for performance, reliability and scalability.
- Implement data quality checks and monitoring to ensure data accuracy and integrity.
- Work with data developers, engineers and data analysts to provide them with the necessary data infrastructure and tools for analysis and reporting.
- Troubleshoot and resolve data-related issues, including performance bottlenecks and data inconsistencies.
- Stay up to date with the latest trends and technologies in data engineering and recommend improvements to existing systems and processes.
Skillset
- Highly self-motivated, works independently, assumes ownership and is results-oriented.
- A desire and interest to stay up to date with the latest changes in Databricks, Azure and related data platform technologies.
- Time-management skills and the ability to establish reasonable and attainable deadlines for resolution.
- Strong programming skills in languages such as SQL, Python, Scala or Spark.
- Experience working with Databricks and Azure services, such as Azure Data Lake Storage, Azure Data Factory, Azure Databricks, Azure SQL Database and Azure Synapse Analytics.
- Proficiency in data modeling, database design and Spark SQL query optimization.
- Familiarity with big data technologies and frameworks like Hadoop, Spark and Hive.
- Familiarity with data governance and security best practices.
- Knowledge of data integration patterns and tools.
- Understanding of cloud computing concepts and distributed computing principles.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills to work effectively in an agile team environment.
- Ability to handle multiple tasks and prioritize work in a fast-paced and dynamic environment.
Qualifications
- Bachelor's degree in Computer Science, Engineering or a related field.
- 4+ years of proven experience as a Data Engineer, with a focus on designing and building data pipelines.
- Experience in working with big and complex data environments.
- Certifications in Databricks or Azure services are a plus.
- Experience with data streaming technologies such as Apache Kafka or Azure Event Hubs is a plus.
Company Description
Here at SoftwareOne, we give you the flexibility to unleash your creativity, without limits. We encourage autonomy and thinking outside the box, and we can't wait to hear your new ideas. Although all businesses say it, we truly believe in work-life harmony. Our people are our greatest asset, and we'll go the extra mile to ensure you're happy here. We want our people to be their true authentic selves at all times, because that's when real creativity happens.
At SoftwareOne, we believe that our people are our greatest asset. We offer:
- A flexible work environment that encourages creativity and innovation.
- Opportunities for professional growth and development.
- An inclusive team culture where your ideas are valued and your contributions make a difference.
- The chance to work on ambitious projects that push the boundaries of technology.
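A minimal, illustrative sketch of the Delta Lake upsert pattern implied by the lakehouse responsibilities above, assuming a Databricks or Delta-enabled Spark environment; the storage paths and key column are hypothetical.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

# Incremental batch landed by an upstream tool such as Azure Data Factory (hypothetical path).
updates = spark.read.parquet(
    "abfss://landing@lakehouse.dfs.core.windows.net/customers/2025-06-10/"
)

target_path = "abfss://curated@lakehouse.dfs.core.windows.net/delta/customers"

if DeltaTable.isDeltaTable(spark, target_path):
    # Upsert: update existing customers, insert new ones.
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load: create the Delta table.
    updates.write.format("delta").save(target_path)
```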
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Details
Role: Senior Developer
Required Technical Skill Set: Spark/Scala/Unix
Desired Experience Range: 5-8 years
Location of Requirement: Pune
Desired Competencies (Technical/Behavioral Competency)
Must-Have (ideally not more than 3-5):
- Minimum 4+ years of experience in Spark Scala development
- Experience in designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, Parquet file format, YARN, MapReduce and Sqoop
- Good experience in writing and optimizing Spark jobs, Spark SQL, etc.
- Should have worked on both batch and streaming data processing
- Experience in writing and optimizing complex Hive and SQL queries to process huge data; good with UDFs, tables, joins, views, etc.
- Experience in debugging Spark code
- Working knowledge of basic UNIX commands and shell scripting
- Experience with Autosys and Gradle
Good-to-Have:
- Good analytical and debugging skills
- Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status
- Write clear and precise documentation/specifications
- Work in an agile environment
- Create documentation and document all developed mappings
Responsibilities of / Expectations from the Role:
1. Create Scala/Spark jobs for data transformation and aggregation
2. Produce unit tests for Spark transformations and helper methods
3. Write Scaladoc-style documentation with all code
4. Design data processing pipelines
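The posting targets Scala, but as a purely illustrative sketch (kept in Python for consistency with the other examples in this feed), here is a unit-testable Spark transformation with a pytest test of the kind listed in the responsibilities above; all table and column names are hypothetical.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def aggregate_daily_sales(df):
    """Transformation under test: total sales amount per store per day."""
    return (
        df.withColumn("sale_date", F.to_date(F.to_timestamp("sale_ts")))
        .groupBy("store_id", "sale_date")
        .agg(F.sum("amount").alias("total_amount"))
    )


@pytest.fixture(scope="session")
def spark():
    # Small local session for fast unit tests.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_aggregate_daily_sales(spark):
    rows = [
        ("s1", "2025-06-01 10:00:00", 10.0),
        ("s1", "2025-06-01 12:00:00", 5.0),
        ("s2", "2025-06-01 09:30:00", 7.5),
    ]
    df = spark.createDataFrame(rows, ["store_id", "sale_ts", "amount"])

    result = {
        (r["store_id"], str(r["sale_date"])): r["total_amount"]
        for r in aggregate_daily_sales(df).collect()
    }

    assert result[("s1", "2025-06-01")] == 15.0
    assert result[("s2", "2025-06-01")] == 7.5
```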
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are looking for a highly skilled Senior Data Engineer with strong expertise in Apache Spark and Databricks to join our growing data engineering team. You will be responsible for designing, developing, and optimizing scalable data pipelines and applications using modern cloud data technologies. This is a hands-on role requiring deep technical knowledge, strong problem-solving skills, and a passion for building efficient, high-performance data solutions that drive business value.
Responsibilities:
- Design, develop, and implement scalable data pipelines and applications using Apache Spark and Databricks, adhering to industry best practices.
- Perform in-depth performance tuning and optimization of Spark applications within the Databricks environment.
- Troubleshoot complex issues related to data ingestion, transformation, and pipeline execution.
- Collaborate with cross-functional teams including data scientists, analysts, and architects to deliver end-to-end data solutions.
- Continuously evaluate and adopt new technologies and tools in the Databricks and cloud ecosystem.
- Optimize Databricks cluster configurations for cost-effectiveness and performance.
- Apply data engineering principles to enable high-quality data ingestion, transformation, and delivery processes.
- Document technical designs, development processes, and operational procedures.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in data engineering or big data development.
- 5+ years of hands-on experience with Apache Spark and Databricks.
- Deep understanding of Spark internals, Spark Streaming, and Delta Lake.
- Experience in developing solutions using Azure Data Services including Azure Databricks, Azure Data Factory, Azure DevOps, Azure Functions, Azure SQL Database, Azure Event Grid and Cosmos DB.
- Familiarity with Striim or similar real-time data integration platforms is a plus.
- Proficiency in PySpark or Scala.
- Strong experience in performance tuning, cost optimization, and cluster management in Databricks.
- Solid understanding of data warehousing, ETL/ELT pipelines, and data modelling.
- Experience working with cloud platforms (Azure preferred; AWS/GCP is a plus).
- Familiarity with Agile/Scrum methodologies.
Preferred Qualifications:
- Databricks Certified Professional Data Engineer certification is a strong plus.
- Strong communication skills, both written and verbal, with the ability to convey technical concepts to non-technical stakeholders.
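For illustration only, a small PySpark sketch of the kinds of performance-tuning levers this role works with (adaptive query execution, broadcast joins, output-file control); the paths and table names are hypothetical, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuned-join").getOrCreate()

# Let Adaptive Query Execution coalesce shuffle partitions and handle skew at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

# Large fact table and a small dimension table (hypothetical Delta paths).
events = spark.read.format("delta").load("/mnt/curated/events")
regions = spark.read.format("delta").load("/mnt/curated/regions")  # a few thousand rows

# Broadcast the small side to avoid shuffling the large table.
joined = events.join(F.broadcast(regions), on="region_id", how="left")

summary = joined.groupBy("region_name").agg(F.count("*").alias("event_count"))

# Write a sensible number of output files instead of one file per shuffle partition.
summary.coalesce(8).write.mode("overwrite").parquet("/mnt/serving/event_counts_by_region")
```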
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Job Title: Azure Data Engineer
Experience Required: 5+ years
Location: Remote
Employment Type: Full-time
Job Summary: We are looking for a skilled Azure Data Engineer with 5+ years of experience in building and optimizing data pipelines and architectures using Azure services. The ideal candidate will be proficient in big data processing, ETL/ELT pipelines, and Azure-based data solutions. You will work closely with data architects, analysts, and business stakeholders to ensure data quality, availability, and performance.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using Azure Data Factory, Databricks, and Azure Synapse Analytics
- Ingest data from multiple sources (structured, semi-structured, and unstructured) into Azure Data Lake / Data Warehouse
- Build and optimize ETL/ELT workflows for data transformation and integration
- Ensure data integrity and implement monitoring, logging, and alerting for pipelines
- Collaborate with data scientists and analysts to support advanced analytics and machine learning use cases
- Develop and maintain CI/CD pipelines for data solutions using tools like Azure DevOps
- Implement data security, governance, and compliance best practices
- Perform performance tuning and query optimization of SQL-based solutions on Azure
Required Skills:
- Strong experience with Azure Data Factory, Azure Synapse, Azure Data Lake Storage (ADLS), and Azure Databricks
- Solid hands-on experience in SQL, PySpark, Python, or Scala
- Proficiency in designing and implementing data models, partitioning, and data lake architectures
- Experience with Azure SQL Database, Cosmos DB, or SQL Server
- Knowledge of Azure DevOps, Git, and CI/CD processes
- Understanding of Delta Lake, Parquet, ORC, and other file formats used in big data
- Familiarity with data governance frameworks and security models on Azure
Preferred Qualifications:
- Azure certification: Microsoft Certified: Azure Data Engineer Associate (DP-203)
- Experience working in Agile/Scrum environments
- Experience integrating data from on-premises to cloud environments
- Familiarity with Power BI, Azure Monitor, Log Analytics, or Terraform for infrastructure provisioning
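A minimal, illustrative sketch of the "data quality checks and monitoring" responsibility above; the Delta path, columns, and thresholds are hypothetical, and a real pipeline would route the failure into its alerting tool (for example by failing the ADF activity).

```python
import logging
from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline.quality")

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.format("delta").load("/mnt/curated/customers")  # hypothetical path

# Rule 1: primary key must be present and unique.
total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
distinct_keys = df.select("customer_id").distinct().count()

# Rule 2: email column should be mostly populated (warn below 95% completeness).
email_completeness = df.filter(F.col("email").isNotNull()).count() / max(total, 1)

failures = []
if null_keys > 0:
    failures.append(f"{null_keys} rows with NULL customer_id")
if distinct_keys != total - null_keys:
    failures.append("duplicate customer_id values detected")
if email_completeness < 0.95:
    failures.append(f"email completeness {email_completeness:.1%} below threshold")

if failures:
    log.error("Data quality checks failed: %s", "; ".join(failures))
    raise ValueError("Data quality checks failed")
log.info("All data quality checks passed (%d rows).", total)
```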
Posted 1 week ago
50.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are hiring for Digit88
About Digit88
Digit88 empowers digital transformation for innovative and high-growth B2B and B2C SaaS companies as their trusted offshore software product engineering partner. We are a lean mid-stage software company, with a team of 75+ fantastic technologists, backed by executives with deep understanding of and extensive experience in consumer and enterprise product development across large corporations and startups. We build highly efficient and effective engineering teams that solve real and complex problems for our partners. With more than 50+ years of collective experience in areas ranging from B2B and B2C SaaS, web and mobile apps, e-commerce platforms and solutions, custom enterprise SaaS platforms and domains spread across Conversational AI, Chatbots, IoT, Health-tech, ESG/Energy Analytics and Data Engineering, the founding team thrives in a fast-paced and challenging environment that allows us to showcase our best.
The Vision: To be the most trusted technology partner to innovative software product companies world-wide.
The Opportunity
The Digit88 development team is establishing a new offshore product development team for its partner, which is building next-generation Big Data, cloud-based business operation support technology for utilities, retail energy suppliers and Community Choice Aggregators (CCA). The candidate would be joining an existing team of outstanding data engineers in the US, helping us expand the data engineering team and working on different products and on different layers of the infrastructure.
Job Profile
Digit88 is looking for a Big Data Engineer who will build and manage Big Data pipelines to deal with the huge structured data sets that we use as an input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company. Applicants must have a passion for engineering with accuracy and efficiency, be highly motivated and organized, able to work as part of a team, and also possess the ability to work independently with minimal supervision.
To be successful in this role, you should possess:
- The ability to collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
- Participation in design discussions and brainstorming sessions to select, integrate, and maintain the Big Data tools and frameworks required to solve Big Data problems at scale.
- Experience designing and implementing systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.
- The ability to understand and critically review existing data pipelines, and to come up with ideas in collaboration with Technical Leaders and Architects to improve upon current bottlenecks.
- Initiative and the drive to pick up new things proactively, working as a senior individual contributor on the multiple products and features we have.
- 8+ years of experience in developing highly scalable Big Data pipelines.
- Hands-on experience in team leading and in leading product or module development.
- In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.
- Experience with ETL and data pipeline tools like Apache NiFi, Airflow etc.
- Excellent coding skills in Java or Scala, including the understanding to apply appropriate design patterns when required.
- Experience with Git and build tools like Gradle/Maven/SBT.
- Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
- An elegant, readable, maintainable and extensible code style.
You are someone who would easily be able to:
- Work closely with the US and India engineering teams to help build the Java/Scala-based data pipelines.
- Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
- Troubleshoot live production server issues.
- Handle client coordination, work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal team supervision.
- Follow Agile methodology and use JIRA for work planning and issue management/tracking.
Additional Project/Soft Skills:
- Should be able to work independently with India- and US-based team members.
- Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email.
- Strong sense of urgency, with a passion for accuracy and timeliness.
- Ability to work calmly in high-pressure situations and manage multiple projects/tasks.
- Ability to work independently and possess superior skills in issue resolution.
- Should have the passion to learn and implement, analyse and troubleshoot issues.
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About the Role
You'll work directly with the founder: learning fast, owning small but meaningful pipeline tasks, and shipping production code exactly to spec.
What You'll Do
In this role you'll build and ship ETL/ELT pipelines in Python or Scala, crafting and tuning the necessary SQL transformations, while closely following my design documents and verbal briefs and iterating quickly on feedback until the output matches requirements. You'll keep the codebase healthy by working through Git feature branches and pull requests, adding unit tests, and adhering to our pre-commit hooks. Day-to-day work will involve operating across AWS services such as EMR/Spark as projects demand. Learning is continuous: we'll pair regularly for reviews and debugging, and you'll present your progress during short weekly catch-ups.
Must-Have Basics
- Up to 6 months of practical experience (internship, project, or personal lab) in data engineering
- Working knowledge of Python or Scala and solid SQL
- Basic Git workflow familiarity
- Conceptual understanding of big-data tooling (Spark/Hadoop)
- Exposure to at least the core AWS storage/compute services
- Strong willingness to take direction, ask questions, and iterate quickly
- Reside in Ahmedabad and commit to full-time office work
Nice-to-Haves
- Docker or Airflow familiarity
- Data-modeling basics (star/snowflake, SCD)
- Hackathon or open-source contributions
Compensation & Perks
- ₹15,000 – ₹30,000 / month (intern / junior band)
- Direct 1-on-1 mentorship from a senior data engineer & founder
- Dedicated learning budget after 90 days
- Comfortable workspace, high-end dev laptop, free coffee/snacks
How to Apply
Apply with your résumé (PDF). In the note, share a link to code or briefly describe a data project you built. Shortlisted candidates will have an on-site interview (Python and SQL discussions).
Location: S.G. Highway, Ahmedabad
Timing: 8-9 hours (flexible)
Experience: 0 to 6 months
If you're hungry to learn, enjoy clear guidance, and want to grow into a full-stack data engineer, I'd love to hear from you.
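Purely as a toy illustration of the kind of small ETL task described above (the file names and columns are hypothetical, not from the posting):

```python
import pandas as pd  # pip install pandas pyarrow

# Extract: read a raw CSV export.
raw = pd.read_csv("raw_orders.csv", parse_dates=["order_ts"])

# Transform: drop bad rows, normalise text, and derive a reporting column.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .assign(
           city=lambda d: d["city"].str.strip().str.title(),
           order_date=lambda d: d["order_ts"].dt.date,
       )
       .query("amount > 0")
)

# Load: write an analytics-friendly columnar file.
clean.to_parquet("orders_curated.parquet", index=False)
print(f"Wrote {len(clean)} clean rows out of {len(raw)}")
```

At this scale pandas is enough; the same extract-transform-load shape carries over to Spark on EMR when the data no longer fits on one machine.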
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About The Role
Grade Level (for internal use): 09
S&P Global Market Intelligence
The Role: Software Developer II (.Net Backend Developer)
Grade (relevant for internal applicants only): 9
The Location: Ahmedabad, Gurgaon, Hyderabad
The Team: S&P Global Market Intelligence, a best-in-class sector-focused news and financial information provider, is looking for a Software Developer to join our Software Development team in our India offices. This is an opportunity to work on a self-managed team to maintain, update, and implement processes utilized by other teams. Coordinate with stakeholders to design innovative functionality in existing and future applications. Work across teams to enhance the flow of our data.
What's In It For You
- This is the place to hone your existing skills while having the chance to be exposed to fresh and divergent technologies.
- Exposure to the latest, cutting-edge technologies in the full-stack ecosystem.
- Opportunity to grow personally and professionally.
- Exposure to working on AWS Cloud solutions will be an added advantage.
Responsibilities
- Identify, prioritize, and execute tasks in an Agile software development environment.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages and tools.
- Produce system design documents and participate actively in technical walkthroughs.
- Demonstrate a strong sense of ownership and responsibility with release goals. This includes understanding requirements, technical specifications, design, architecture, implementation, unit testing, builds/deployments, and code management.
- Build and maintain the environment for speed, accuracy, consistency and uptime.
- Collaborate with team members across the globe.
- Interface with users, business analysts, quality assurance testers and other teams as needed.
Basic Qualifications (What We're Looking For)
- Bachelor's/Master's degree in Computer Science, Information Systems or equivalent.
- 3-5 years of experience.
- Solid experience with building processes; debugging, refactoring, and enhancing existing code, with an understanding of performance and scalability.
- Competency in C#, .NET, .NET Core.
- Experience with DevOps practices and modern CI/CD deployment models using Jenkins.
- Experience supporting production environments.
- Knowledge of T-SQL and MS SQL Server.
- Exposure to Python/Scala/AWS technologies is a plus.
- Exposure to React/Angular is a plus.
Preferred Qualifications
- Exposure to DevOps practices and CI/CD pipelines such as Azure DevOps or GitHub Actions.
- Familiarity with automated unit testing is advantageous.
- Exposure to working on AWS Cloud solutions will be an added advantage.
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What's In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 316163
Posted On: 2025-06-09
Location: Ahmedabad, Gujarat, India
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Overview
Job Title: Full Stack, AVP
Location: Bangalore, India
Role Description
Responsible for developing, enhancing, modifying and/or maintaining applications in the Enterprise Risk Technology environment. Software developers design, code, test, debug and document programs as well as support activities for the corporate systems architecture. Employees work closely with business partners in defining requirements for system applications, typically have in-depth knowledge of development tools and languages, are clearly recognized as content experts by peers, and work as individual contributors. The role typically requires 10-15 years of applicable experience.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Flexible working arrangements
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 years and above
Your Key Responsibilities
- Develop software in Java and object-oriented databases on a grid, using the Kubernetes and OpenShift platforms.
- Build REST web services.
- Design the interface between the UI and REST services.
- Build data-grid-centric UIs.
- Participate fully in the development process through the entire software lifecycle, and in the agile software development process.
- Use BDD techniques, collaborating closely with users, analysts, developers, and other testers; make sure we are building the right thing.
- Write code and write it well. Be proud to call yourself a programmer. Use test-driven development, write clean code, and refactor constantly. Make sure we are building the thing right.
- Be ready to work on a range of technologies and components, including user interfaces, services, and databases; act as a generalizing specialist.
- Define and evolve the architecture of the components you are working on and contribute to architectural decisions at a department and bank-wide level.
- Ensure that the software you build is reliable and easy to support in production. Be prepared to take your turn on call providing 3rd-line support when it's needed.
- Help your team to build, test and release software within short lead times and with a minimum of waste.
- Work to develop and maintain a highly automated Continuous Delivery pipeline.
- Help create a culture of learning and continuous improvement within your team and beyond.
Your Skills And Experience
- Deep knowledge of at least one modern programming language, along with an understanding of both object-oriented and functional programming; ideally knowledge of Java and Scala.
- Practical experience of test-driven development and constant refactoring in a continuous integration environment.
- Practical experience of web technologies, frameworks and tools like HTML, CSS, JavaScript, React.
- Experience with or exposure to Big Data Hadoop technologies / BI tools will be an added advantage.
- Experience in Oracle PL/SQL programming is required.
- Knowledge of SQL and relational databases.
- Experience working in an agile team, practicing Scrum, Kanban or XP.
- Experience of performing functional analysis is highly desirable.
The ideal candidate will also have:
- Experience with Behavior-Driven Development, particularly how it can be used to define requirements in a collaborative manner to ensure the team builds the right thing and creates a system of living documentation.
- Good to have: experience with a range of technologies that store, transport, and manipulate data, for example NoSQL, document databases, graph databases, Hadoop/HDFS, streaming and messaging.
- Exposure to architecture and design approaches that support rapid, incremental, and iterative delivery, such as Domain-Driven Design, CQRS, Event Sourcing and microservices, will be an added advantage.
How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Req ID: 326913
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India.
I'm currently looking for a skilled Data Engineer to join our team! If you're passionate about building data pipelines, optimizing ETL processes, and working with cutting-edge technologies, this could be a great fit for you.
💻 Tech Stack: Terraform on AWS, along with Spark and Scala
✅ Strong SQL & Python skills
✅ Experience with cloud platforms (AWS/Azure/GCP)
✅ Prior experience in the banking/finance domain is a plus!
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Basavanagudi, Bengaluru, Karnataka
On-site
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.
Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark.
- Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
- Ensure high performance and reliability of ETL jobs in production.
- Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions.
- Implement data quality checks and data lineage tracking for transparency and auditability.
- Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
- Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
- Write clean, efficient, and maintainable code in Python and Java.
- Contribute to architectural decisions, performance tuning, and scalability planning.
Required Skills:
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience in using version control tools like Git and CI/CD pipelines.
Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience in building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Self-driven, with the ability to work independently and as part of a team.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: reliably commute or plan to relocate before starting work (Preferred)
Application Questions:
- Are you ready to join within 15 days?
- What is your current CTC?
Experience:
- Python: 4 years (Preferred)
- PySpark: 4 years (Required)
- Data warehouse: 4 years (Required)
Work Location: In person
Application Deadline: 12/06/2025
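As an illustration of the nice-to-have "real-time streaming data pipelines" item, here is a minimal PySpark Structured Streaming sketch reading from Kafka; the broker, topic, schema, and output paths are hypothetical, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("duration_sec", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read JSON events from a Kafka topic (broker and topic names are hypothetical).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land the parsed events into HDFS as Parquet with checkpointed, fault-tolerant writes.
query = (
    events.writeStream.format("parquet")
    .option("path", "hdfs:///data/curated/clickstream/")
    .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```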
Posted 1 week ago
0.0 - 18.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Indore, Madhya Pradesh, India Qualification : BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding about various architecture patterns like data lake, data lake house, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, Data Zone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies Understanding in designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with Sagemaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Skills Required : AWS, Big Data, Spark, Technical Architecture Role : Drive innovation within Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating innovative mindset and enable fast paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all tasks being aligned. Ensure high quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups) Conduct technical training(s)/session(s), write whitepapers/ case studies / blogs etc. Experience : 10 to 18 years Job Reference Number : 12895
Posted 1 week ago
0.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India
Qualification :
Job Title: Senior Big Data Cloud QA
Job Description: We are seeking an experienced Senior Big Data Cloud Quality Assurance Engineer to join our dynamic team. In this role, you will be responsible for ensuring the quality and performance of our big data applications and services deployed in cloud environments. You will work closely with developers, product managers, and other stakeholders to define testing strategies, develop test plans, and execute comprehensive testing processes.
Key Responsibilities:
Design and implement test plans and test cases for big data applications in cloud environments.
Perform functional, performance, and scalability testing on large datasets.
Identify, record, and track defects using bug tracking tools.
Collaborate with development teams to understand product requirements and provide feedback on potential quality issues early in the development cycle.
Develop and maintain automated test scripts and frameworks for continuous integration and deployment.
Analyze test results and provide detailed reports on the quality of releases.
Mentor junior QA team members and share best practices in testing methodologies and tools.
Stay updated on industry trends and advancements in big data and cloud technologies to continuously improve QA processes.
Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Minimum of 5 years of experience in software testing, with at least 2 years focused on big data applications and cloud technologies.
Proficiency in testing frameworks and tools, such as JUnit, TestNG, Apache JMeter, or similar.
Experience with big data technologies, such as Hadoop, Spark, or distributed databases.
Strong understanding of cloud platforms, such as AWS, Azure, or Google Cloud.
Familiarity with programming languages such as Java, Python, or Scala.
Excellent analytical and problem-solving skills, with a keen attention to detail.
Strong communication skills, both verbal and written, along with the ability to work collaboratively in a team environment.
If you are a motivated and detail-oriented professional looking to advance your career in big data quality assurance, we encourage you to apply for this exciting opportunity.
Skills Required : ETL Testing, Bigdata, Database Testing, API Testing, Selenium, SQL, Linux, Cloud Testing
Role : Job Title: Senior Big Data Cloud QA
Roles and Responsibilities:
1. Design and implement comprehensive test plans and test cases for big data applications deployed in cloud environments.
2. Collaborate with data engineers and developers to understand system architecture and data flow for effective testing.
3. Perform manual and automated testing for big data processing frameworks and tools, ensuring data quality and integrity.
4. Lead and mentor junior QA team members, providing guidance on best practices for testing big data solutions.
5. Identify and document defects, track their resolution, and verify fixes in a timely manner.
6. Develop and maintain automated test scripts using appropriate testing frameworks compatible with cloud big data platforms.
7. Execute performance testing to assess the scalability and reliability of big data applications in cloud environments.
8. Participate in design and code reviews, providing insights on testability and quality.
9. Work with stakeholders to define acceptance criteria and ensure that deliverables meet business requirements.
10. Stay updated on industry trends and advancements in big data technologies and cloud services to continually improve testing processes.
11. Ensure compliance with security and data governance policies during testing activities.
12. Provide detailed reports and metrics on testing progress, coverage, and outcomes to project stakeholders.
Experience : 5 to 7 years
Job Reference Number : 12944
Posted 1 week ago
0.0 - 18.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bengaluru, Karnataka, India;Indore, Madhya Pradesh, India;Pune, Maharashtra, India;Hyderabad, Telangana, India Qualification : Overall 10-18 yrs. of Data Engineering experience with Minimum 4+ years of hands on experience in Databricks. Ready to travel Onsite and work at client location. Proven hands-on experience as a Databricks Architect or similar role with a deep understanding of the Databricks platform and its capabilities. Analyze business requirements and translate them into technical specifications for data pipelines, data lakes, and analytical processes on the Databricks platform. Design and architect end-to-end data solutions, including data ingestion, storage, transformation, and presentation layers, to meet business needs and performance requirements. Lead the setup, configuration, and optimization of Databricks clusters, workspaces, and jobs to ensure the platform operates efficiently and meets performance benchmarks. Manage access controls and security configurations to ensure data privacy and compliance. Design and implement data integration processes, ETL workflows, and data pipelines to extract, transform, and load data from various sources into the Databricks platform. Optimize ETL processes to achieve high data quality and reduce latency. Monitor and optimize query performance and overall platform performance to ensure efficient execution of analytical queries and data processing jobs. Identify and resolve performance bottlenecks in the Databricks environment. Establish and enforce best practices, standards, and guidelines for Databricks development, ensuring data quality, consistency, and maintainability. Implement data governance and data lineage processes to ensure data accuracy and traceability. Mentor and train team members on Databricks best practices, features, and capabilities. Conduct knowledge-sharing sessions and workshops to foster a data-driven culture within the organization. Will be responsible for Databricks Practice Technical/Partnership initiatives. Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects. Skills Required : Databricks, Unity Catalog, Pyspark, ETL, SQL, Delta Live Tables Role : Bachelor's or Master’s degree in Computer Science, Information Technology, or related field. In depth hands-on implementation knowledge on Databricks. Delta Lake, Delta table - Managing Delta Tables, Databricks Cluster Configuration, Cluster policies. Experience handling structured and unstructured datasets Strong proficiency in programming languages like Python, Scala, or SQL. Experience with Cloud platforms like AWS, Azure, or Google Cloud, and understanding of cloud-based data storage and computing services. Familiarity with big data technologies like Apache Spark, Hadoop, and data lake architectures. Develop and maintain data pipelines, ETL workflows, and analytical processes on the Databricks platform. Should have good experience in Data Engineering in Databricks Batch process and Streaming Should have good experience in creating Workflows & Scheduling the pipelines. Should have good exposure on how to make packages or libraries available in DB. Familiarity in Databricks default runtimes Databricks Certified Data Engineer Associate/Professional Certification (Desirable). Should have experience working in Agile methodology Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail. 
Experience : 10 to 18 years Job Reference Number : 12932
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bangalore - Karnataka, India - EOIZ Industrial Area Worker Type Reference: Regular - Permanent Pay Rate Type: Salary Career Level: T4(A) Job ID: R-45392-2025 Description & Requirements Introduction: A Career at HARMAN HARMAN Technology Services (HTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs Work at the convergence of cross channel UX, cloud, insightful data, IoT and mobility Empower companies to create new digital business models, enter new markets, and improve customer experiences About the Role We are seeking an experienced “Azure Data Architect” who will develop and implement data engineering project including enterprise data hub or Big data platform. Develop and implement data engineering project including data lake house or Big data platform What You Will Do Create data pipelines for more efficient and repeatable data science projects Design and implement data architecture solutions that support business requirements and meet organizational needs Collaborate with stakeholders to identify data requirements and develop data models and data flow diagrams Work with cross-functional teams to ensure that data is integrated, transformed, and loaded effectively across different platforms and systems Develop and implement data governance policies and procedures to ensure that data is managed securely and efficiently Develop and maintain a deep understanding of data platforms, technologies, and tools, and evaluate new technologies and solutions to improve data management processes Ensure compliance with regulatory and industry standards for data management and security. Develop and maintain data models, data warehouses, data lakes and data marts to support data analysis and reporting. Ensure data quality, accuracy, and consistency across all data sources. Knowledge of ETL and data integration tools such as Informatica, Qlik Talend, and Apache NiFi. Experience with data modeling and design tools such as ERwin, PowerDesigner, or ER/Studio Knowledge of data governance, data quality, and data security best practices Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform. Familiarity with programming languages such as Python, Java, or Scala. Experience with data visualization tools such as Tableau, Power BI, or QlikView. Understanding of analytics and machine learning concepts and tools. Knowledge of project management methodologies and tools to manage and deliver complex data projects. Skilled in using relational database technologies such as MySQL, PostgreSQL, and Oracle, as well as NoSQL databases such as MongoDB and Cassandra. Strong expertise in cloud-based databases such as AWS 3/ AWS glue , AWS Redshift, Iceberg/parquet file format Knowledge of big data technologies such as Hadoop, Spark, snowflake, databricks , and Kafka to process and analyze large volumes of data. Proficient in data integration techniques to combine data from various sources into a centralized location. Strong data modeling, data warehousing, and data integration skills. 
What You Need
10+ years of experience in the information technology industry with a strong focus on Data engineering and architecture, preferably as a data engineering lead
8+ years of data engineering or data architecture experience in successfully launching, planning, and executing advanced data projects.
Experience in working on RFPs/proposals, presales activities, business development and overseeing delivery of Data projects is highly desired
A master’s or bachelor’s degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics.
Candidate should have demonstrated the ability to manage data projects and diverse teams.
Should have experience in creating data and analytics solutions.
Experience in building Data solutions in any one or more domains – Industrial, Healthcare, Retail, Communication
Problem-solving, communication, and collaboration skills.
Good knowledge of data visualization and reporting tools
Ability to normalize and standardize data as per key KPIs and metrics
Develop and implement data engineering projects including data lakehouse or Big data platform
What is Nice to Have
Knowledge of Azure Purview is a must
Knowledge of Azure Data Fabric
Ability to define reference data architecture
Snowflake SnowPro Advanced Certification
Cloud native data platform experience in AWS or Microsoft stack
Knowledge about the latest data trends including data fabric and data mesh
Robust knowledge of ETL and data transformation and data standardization approaches
Key contributor on growth of the COE and influencing client revenues through Data and analytics solutions
Lead the selection, deployment, and management of Data tools, platforms, and infrastructure.
Ability to technically guide a team of data engineers
Oversee the design, development, and deployment of Data solutions
Define, differentiate & strategize new Data services/offerings and create reference architecture assets
Drive partnerships with vendors on collaboration, capability building, go to market strategies, etc.
Guide and inspire the organization about the business potential and opportunities around Data
Network with domain experts
Collaborate with client teams to understand their business challenges and needs.
Develop and propose Data solutions tailored to client specific requirements.
Influence client revenues through innovative solutions and thought leadership.
Lead client engagements from project initiation to deployment.
Build and maintain strong relationships with key clients and stakeholders
Build re-usable Methodologies, Pipelines & Models
What Makes You Eligible
Build and manage a high-performing team of Data engineers and other specialists.
Foster a culture of innovation and collaboration within the Data team and across the organization.
Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment.
Candidates should be confident, energetic self-starters, with strong communication skills.
Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire.
Provide technical guidance and mentorship to the Data team Collaborate with other stakeholders across the company to align the vision and goals Communicate and present the Data capabilities and achievements to clients and partners Stay updated on the latest trends and developments in the Data domain What We Offer Access to employee discounts on world class HARMAN/Samsung products (JBL, Harman Kardon, AKG etc.). Professional development opportunities through HARMAN University’s business and leadership academies. An inclusive and diverse work environment that fosters and encourages professional and personal development. “Be Brilliant” employee recognition and rewards program. You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings!
One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights!
🔍 We Are Hiring: Python, PySpark and SQL Developer (8-12 years)
Relevant Exp – 8-12 Years
JD -
• Python, PySpark and SQL
• 8+ years of experience in Spark, Scala, PySpark for big data processing
• Proficiency in Python programming for data manipulation and analysis.
• Experience with Python libraries such as Pandas, NumPy.
• Knowledge of Spark architecture and components (RDDs, DataFrames, Spark SQL).
• Strong knowledge of SQL for querying databases.
• Experience with database systems like Lakehouse, PostgreSQL, Teradata, SQL Server.
• Ability to write complex SQL queries for data extraction and transformation.
• Strong analytical skills to interpret data and provide insights.
• Ability to troubleshoot and resolve data-related issues.
• Strong problem-solving skills to address data-related challenges.
• Effective communication skills to collaborate with cross-functional teams.
Role/Responsibilities:
• Work on development activities along with lead activities
• Coordinate with the Product Manager (PdM) and Development Architect (Dev Architect) and handle deliverables independently
• Collaborate with other teams to understand data requirements and deliver solutions.
• Design, develop, and maintain scalable data pipelines using Python and PySpark.
• Utilize PySpark and Spark scripting for data processing and analysis.
• Implement ETL (Extract, Transform, Load) processes to ensure data is accurately processed and stored.
• Develop and maintain Power BI reports and dashboards.
• Optimize data pipelines for performance and reliability.
• Integrate data from various sources into centralized data repositories.
• Ensure data quality and consistency across different data sets.
• Analyze large data sets to identify trends, patterns, and insights.
• Optimize PySpark applications for better performance and scalability.
• Continuously improve data processing workflows and infrastructure.
Interested candidates, please share your updated resume along with the following details:
Total Experience:
Relevant Experience in Python, PySpark and SQL:
Current Location:
Current CTC:
Expected CTC:
Notice Period:
🔒 We assure you that your profile will be handled with strict confidentiality.
📩 Apply now and be part of this incredible journey.
Thanks,
Syed Mohammad!!
syed.m@anlage.co.in
Posted 1 week ago
6.0 - 10.0 years
20 - 30 Lacs
Hyderabad
Hybrid
Position : Big Data Developers (mid to senior level)
Location : Hyderabad (Hybrid Mode)
Must-Have Skills:
Big Data (PySpark + Java/Scala)
Preferred Skills:
AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, similar)
CI/CD (Jenkins or another)
Relational Databases experience (any)
NoSQL databases experience (any)
Microservices or Domain services or API gateways or similar
Containers (Docker, K8s, similar)
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm. RISK BUSINESS The Risk Business identifies, monitors, evaluates, and manages the firm’s financial and non-financial risks in support of the firm’s Risk Appetite Statement and the firm’s strategic plan. Operating in a fast paced and dynamic environment and utilizing the best in class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and an unwavering commitment to excellence. Overview To ensure uncompromising accuracy and timeliness in the delivery of the risk metrics, our platform is continuously growing and evolving. Risk Engineering combines the principles of Computer Science, Mathematics and Finance to produce large scale, computationally intensive calculations of risk Goldman Sachs faces with each transaction we engage in. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build & adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high profile projects using a raft of technologies that are fit for purpose (Java, Cloud computing, HDFS, Spark, S3, ReactJS, Sybase IQ among many others). A glimpse of the interesting problems that we engineer solutions for, include acquiring high quality data, storing it, performing risk computations in limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces. What We Look For Senior Developer in large projects across a global team of developers and risk managers Performance tune applications to improve memory and CPU utilization. Perform statistical analyses to identify trends and exceptions related Market Risk metrics. Build internal and external reporting for the output of risk metric calculation using data extraction tools, such as SQL, and data visualization tools, such as Tableau. Utilize web development technologies to facilitate application development for front end UI used for risk management actions Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems. Interact with business users for resolving issues with applications. Design and support batch processes using scheduling infrastructure for calculation and distributing data to other systems. Oversee junior technical team members in all aspects of Software Development Life Cycle (SDLC) including design, code review and production migrations. Skills And Experience Bachelor’s degree in Computer Science, Mathematics, Electrical Engineering or related technical discipline 6-9 years’ experience is working risk technology team in another bank, financial institution. Experience in market risk technology is a plus. Experience with one or more major relational / object databases. 
Experience in software development, including a clear understanding of data structures, algorithms, software design and core programming concepts Comfortable multi-tasking, managing multiple stakeholders and working as part of a team Comfortable with working with multiple languages Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant) Experience in working with process scheduling platforms like Apache Airflow. Should be ready to work in GS proprietary technology like Slang/SECDB An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles). Knowledge and experience in distributed computing – parallel computation on a single machine like DASK, Distributed processing on Public Cloud. Knowledge of SDLC and experience in working through entire life cycle of the project from start to end About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity Show more Show less
Posted 1 week ago
9.0 - 14.0 years
30 - 37 Lacs
Noida, New Delhi, Gurugram
Work from Office
Primary Responsibilities: Lead the design, development, and maintenance of scalable, robust, and secure backend services using Scala and Java Architect and implement microservices using the Play Framework Deploy, manage, and optimize applications in Kubernetes on AWS Own and manage a complete suite of microservice applications, ensuring high availability and performance Integrate backend services with PostgreSQL databases and data processing systems Utilize Datadog for monitoring, logging, and performance optimization Work with AWS services, including Elastic Beanstalk, for deployment and management of applications Use GitHub for version control and collaboration Lead and participate in the complete software development life cycle (SDLC), including planning, development, testing, and deployment Troubleshoot, debug, and upgrade existing software Document the backend process to aid in future upgrades and maintenance Perform code reviews and mentor junior developers Collaborate with cross-functional teams to define, design, and ship new features Act as a Service Level Owner, ensuring the reliability, performance, and scalability of the services Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field 10+ years of experience in software development using Scala and Java Solid proficiency in Scala and Java programming languages Experience with Datadog for monitoring, logging, and performance optimization Extensive experience with the Play Framework for building microservices Experience with PostgreSQL databases Experience with Agile methodologies (Scrum, Test Driven Development, Continuous Integration) Familiarity with version control systems, particularly GitHub Solid understanding of data structures and algorithms Proficiency in deploying and managing applications in Kubernetes on AWS Proficiency in AWS services, including Elastic Beanstalk Proven excellent problem-solving and analytical skills Proven solid communication and collaboration skills Proven leadership skills and experience mentoring junior developers Proven attention to detail and a commitment to writing clean, maintainable code Demonstrated ability to work independently and as part of a team Demonstrated ability to take ownership and responsibility for the reliability, performance, and scalability of services
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description The candidate must possess knowledge relevant to the functional area, and act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on the high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, and generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives. Process Manager Roles And Responsibilities Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions. Technical And Functional Skills Typically, a bachelors degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/ Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Show more Show less
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview Viraaj HR Solutions is a leading recruitment agency fostering a culture of excellence and innovation. Our mission is to connect the best talent with the right opportunities, ensuring mutual growth and success. We pride ourselves on our integrity, responsiveness, and commitment to client satisfaction, working relentlessly to understand the unique needs of each business we partner with. Join us in our journey to redefine talent acquisition and contribute to the success of businesses across various sectors. Job Title: AWS Data Engineer Location: On-Site in India Role Responsibilities Design and develop data pipelines using AWS services. Implement ETL processes for data ingestion and transformation. Manage and optimize large-scale distributed data systems. Create and maintain data models that meet business requirements. Collaborate with data scientists and analysts to understand data needs. Ensure data quality and integrity through validation checks. Monitor and troubleshoot data pipeline issues proactively. Implement data security and compliance measures in line with regulations. Analyze system performance and optimize for efficiency. Prepare technical documentation for data processes and architecture. Participate in architecture and design discussions for data solutions. Research and evaluate new AWS tools and technologies. Work closely with cross-functional teams to align data strategies. Provide support for troubleshooting data-related issues. Stay updated on industry trends in data engineering and cloud technology. Qualifications Bachelor's degree in Computer Science, Engineering, or related field. 3+ years of experience as a Data Engineer or in a related role. Proficiency in AWS services such as S3, EC2, Glue, and Redshift. Strong knowledge of SQL and database design principles. Experience with Python and ETL frameworks. Familiarity with data warehousing concepts and solutions. Understanding of data governance and best practices. Hands-on experience with big data technologies such as Hadoop or Spark. Ability to work independently and in a team environment. Excellent problem-solving skills and attention to detail. Strong communication skills to articulate complex technical concepts. Experience in Agile methodologies is a plus. Knowledge of machine learning concepts is an added advantage. Ability to adapt to a fast-paced and evolving environment. Willingness to learn new tools and technologies as needed. Skills: data modeling,cloud computing,scala,aws data engineer,problem-solving,sql proficiency,data analysis,database management,spark,big data technologies (hadoop, spark),sql,etl frameworks,database design,python,aws services (s3, ec2, glue, redshift),data governance,communication,data warehousing Show more Show less
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
Cities such as Bengaluru, Hyderabad, Pune, Chennai, and the Delhi NCR region (Noida and Gurugram) are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
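To make these skills concrete, here is a small, framework-free sketch in plain Scala showing a few of the functional programming ideas mentioned above: immutable collections, case classes, and higher-order functions. The job titles and numbers in it are illustrative only and are not taken from any specific listing.

// A minimal sketch of common functional Scala idioms (illustrative data only).
final case class JobPosting(title: String, city: String, minExpYears: Int)

object ScalaSkillsDemo {
  def main(args: Array[String]): Unit = {
    // Immutable collection of sample postings
    val postings = List(
      JobPosting("Lead Data Engineer", "Hyderabad", 8),
      JobPosting("Backend Developer", "Bengaluru", 3),
      JobPosting("Big Data QA", "Noida", 5)
    )

    // Higher-order functions instead of mutable loops
    val seniorTitles = postings.filter(_.minExpYears >= 5).map(_.title)
    val averageMinExp = postings.map(_.minExpYears).sum.toDouble / postings.size

    println(s"Senior roles: ${seniorTitles.mkString(", ")}")
    println(f"Average minimum experience: $averageMinExp%.1f years")
  }
}

Frameworks such as Spark, Akka, and Play build on exactly these idioms, so being comfortable with them in plain Scala transfers directly to the tools listed above.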
Here are 25 interview questions that you may encounter when applying for Scala roles:
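The exact questions vary by company, but topics such as pattern matching, Option handling, and tail recursion come up very frequently. The snippet below is a hypothetical example of the kind of code a candidate may be asked to walk through in such a round; it is not drawn from any particular interview.

import scala.annotation.tailrec

object InterviewWarmup {
  // Tail-recursive factorial: the accumulator keeps the call stack flat.
  @tailrec
  def factorial(n: Int, acc: BigInt = 1): BigInt =
    if (n <= 1) acc else factorial(n - 1, acc * n)

  // Option with pattern matching instead of null checks.
  def describeExperience(years: Option[Int]): String = years match {
    case Some(y) if y >= 8 => s"Senior ($y years)"
    case Some(y)           => s"Mid-level ($y years)"
    case None              => "Experience not specified"
  }

  def main(args: Array[String]): Unit = {
    println(factorial(10))               // 3628800
    println(describeExperience(Some(9))) // Senior (9 years)
    println(describeExperience(None))    // Experience not specified
  }
}

Being able to explain why @tailrec compiles here (the recursive call is in tail position) is usually as important as writing the code itself.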
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!