6241 Scala Jobs - Page 48

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a skilled Data Engineer to join our growing data team in India. You will be responsible for designing, building, and maintaining scalable data infrastructure and pipelines that enable data-driven decision making across our organization and client projects. This role offers the opportunity to work with cutting-edge technologies and contribute to innovative data solutions for global clients.

What you do

Technical Skills
- Minimum 3+ years of experience in data engineering or a related field
- Strong programming skills in Python and/or Scala/Java
- Experience with SQL and database technologies (PostgreSQL, MySQL, MongoDB)
- Hands-on experience with data processing frameworks: Apache Spark and the Hadoop ecosystem, Apache Kafka for streaming data, and Apache Airflow or similar workflow orchestration tools
- Knowledge of data warehouse concepts and technologies
- Experience with containerization (Docker, Kubernetes)
- Understanding of data modeling principles and best practices

Cloud & Platform Experience
- Experience with at least one major cloud platform (AWS, Azure, or GCP)
- Familiarity with cloud-native data services: data lakes, data warehouses, and analytics services; serverless computing and event-driven architectures; identity and access management for data systems
- Knowledge of Infrastructure as Code (Terraform, CloudFormation, ARM templates)

Data & Analytics
- Understanding of data governance and security principles
- Experience with data quality frameworks and monitoring
- Knowledge of dimensional modeling and data warehouse design
- Familiarity with business intelligence and analytics tools
- Understanding of data privacy regulations (GDPR, CCPA)

Preferred Qualifications

Advanced Technical Skills
- Experience with modern data stack tools (dbt, Fivetran, Snowflake, Databricks)
- Knowledge of machine learning pipelines and MLOps practices
- Experience with event-driven architectures and microservices
- Familiarity with data mesh and data fabric concepts
- Experience with graph databases (Neo4j, Amazon Neptune)

Industry Experience
- Experience in a digital agency or consulting environment
- Background in financial services, e-commerce, retail, or customer experience platforms
- Knowledge of marketing technology and customer data platforms
- Experience with real-time analytics and personalization systems

Soft Skills
- Strong problem-solving and analytical thinking abilities
- Excellent communication skills for client-facing interactions
- Ability to work independently and manage multiple projects
- Adaptability to a rapidly changing technology landscape
- Experience mentoring junior team members

What we ask

Data Infrastructure & Architecture
- Design and implement robust, scalable data architectures and pipelines
- Build and maintain ETL/ELT processes for batch and real-time data processing
- Develop data models and schemas optimized for analytics and reporting
- Ensure data quality, consistency, and reliability across all data systems

Platform-Agnostic Development
- Work with multiple cloud platforms (AWS, Azure, GCP) based on client requirements
- Implement data solutions using various technologies and frameworks
- Adapt quickly to new tools and platforms as project needs evolve
- Maintain expertise across different cloud ecosystems and services

Data Pipeline Development
- Create automated data ingestion pipelines from various sources (APIs, databases, files, streaming)
- Implement data transformation logic using modern data processing frameworks
- Build monitoring and alerting systems for data pipeline health
- Optimize pipeline performance and cost-efficiency
Collaboration & Integration
- Work closely with data scientists, analysts, and business stakeholders
- Collaborate with DevOps teams to implement CI/CD for data pipelines
- Partner with client teams to understand data requirements and deliver solutions
- Participate in architecture reviews and technical decision-making

What we offer
You'll join an international network of data professionals within our organisation. We support continuous development through our dedicated Academy. If you're looking to push the boundaries of innovation and creativity in a culture that values freedom and responsibility, we encourage you to apply. At Valtech, we're here to engineer experiences that work and reach every single person. To do this, we are proactive about creating workplaces that work for every person at Valtech. Our goal is to create an equitable workplace which gives people from all backgrounds the support they need to thrive, grow and meet their goals (whatever they may be). You can find out more about what we're doing to create a Valtech for everyone here. Please do not worry if you do not meet all of the criteria or if you have some gaps in your CV. We'd love to hear from you and see if you're our next member of the Valtech team!
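For candidates gauging the level expected here, a minimal, illustration-only sketch of the kind of batch ETL pipeline this posting describes, written against Spark's Scala API. The bucket paths, column names, and local master are assumptions for the example, not details from the posting.

```scala
// Minimal Spark batch ETL sketch (Scala). Paths, column names, and the
// local master are illustrative placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .master("local[*]") // replace with your cluster manager in production
      .getOrCreate()

    // Extract: read raw CSV landed by an upstream ingestion job
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-bucket/raw/orders/")

    // Transform: basic cleansing plus a daily revenue aggregate
    val daily = raw
      .filter(col("order_status") === "COMPLETED")
      .withColumn("order_date", to_date(col("order_ts")))
      .groupBy("order_date")
      .agg(sum("amount").as("daily_revenue"), count(lit(1)).as("orders"))

    // Load: write partitioned Parquet for downstream analytics
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-bucket/curated/daily_revenue/")

    spark.stop()
  }
}
```

In a real setup a workflow orchestrator such as Airflow would typically schedule a job like this and handle retries and alerting.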

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Tasks & Experience
- Experience in building and managing data pipelines, including development and operations of data pipelines in the cloud (preferably Azure)
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark
- Deep expertise in architecting data pipelines in the cloud using cloud-native technologies
- Good experience with both ETL and ELT ingestion patterns
- Hands-on experience working on large volumes of data (petabyte scale) with distributed compute frameworks
- Good understanding of container platforms (Kubernetes and Docker)
- Excellent knowledge of and experience with object-oriented programming
- Familiarity developing with RESTful API interfaces
- Experience with formats such as JSON and YAML
- Proficient in relational database design and development
- Good knowledge of data warehousing concepts
- Working experience with agile Scrum methodology

Technical Skills
- Strong skills in distributed cloud data analytics platforms such as Databricks, HDInsight, EMR clusters, etc.
- Strong programming skills in Python/Java/R/Scala, etc.
- Experience with stream-processing systems: Kafka, Apache Storm, Spark Streaming, Apache Flink, etc.
- Hands-on working knowledge of cloud data lake stores such as Azure Data Lake Storage
- Data pipeline orchestration with Azure Data Factory or AWS Data Pipeline
- Good knowledge of file formats such as ORC, Parquet, Delta, and Avro
- Good experience using SQL and NoSQL databases (MySQL, Elasticsearch, MongoDB, PostgreSQL, Cassandra) running huge volumes of data
- Strong experience with networking and security measures
- Proficiency with CI/CD automation, specifically DevOps build and release pipelines
- Proficiency with Git, including branching/merging strategies, pull requests, and basic command-line functions
- Good data modelling skills

Job Responsibilities
- Cloud analytics, storage, security, resiliency, and governance
- Build and maintain the data architecture for data engineering and data science projects
- Extract, transform, and load data from source systems to a data lake or data warehouse, leveraging a combination of IaaS or SaaS components
- Perform compute on huge volumes of data using open-source projects like Databricks/Spark or Hadoop
- Define table schemas and adapt quickly to pipeline changes
- Work with high-volume unstructured and streaming datasets
- Manage NoSQL databases on the cloud (AWS, Azure, etc.)
- Architect solutions to migrate projects from on-premises to the cloud
- Research, investigate, and implement newer technologies to continually evolve security capabilities
- Identify valuable data sources and automate collection processes
- Implement adequate networking and security measures for the data pipeline
- Implement monitoring solutions for the data pipeline
- Support the design and implementation of data engineering solutions
- Maintain excellent documentation for understanding and accessing data storage
- Work independently as well as in teams to deliver transformative solutions to clients
- Be proactive and constantly pay attention to the scalability, performance, and availability of our systems
- Establish privacy/security hierarchies and regulate access
- Collaborate with engineering and product development teams
- Systematic problem-solving approach with strong communication skills and a sense of ownership and drive

Qualifications
- Bachelor's or Master's degree in Computer Science or a relevant stream
- Any relevant cloud data engineering certification
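Purely as an illustration of the stream-processing skills listed above, a minimal sketch of Spark Structured Streaming reading from Kafka and writing files to a cloud data lake store, in Scala. The broker address, topic, and ADLS paths are invented placeholders, and the job assumes the `spark-sql-kafka` connector is on the classpath.

```scala
// Hedged sketch: Spark Structured Streaming from Kafka to a data lake path.
// Broker, topic, and storage paths are placeholders only.
import org.apache.spark.sql.SparkSession

object ClickstreamStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-stream").getOrCreate()

    // Source: a Kafka topic of raw events
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "clickstream")               // placeholder topic
      .option("startingOffsets", "latest")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS payload", "timestamp")

    // Sink: append files to a bronze zone; on Databricks, format("delta") is a common alternative
    val query = events.writeStream
      .format("parquet")
      .option("path", "abfss://lake@account.dfs.core.windows.net/bronze/clickstream/")
      .option("checkpointLocation", "abfss://lake@account.dfs.core.windows.net/_checkpoints/clickstream/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```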

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Location: Noida, Uttar Pradesh, India (Hybrid/Remote options available)
Employment Type: Full-time

About Fusionpact Technologies
At Fusionpact Technologies, we are at the forefront of leveraging a fusion of cutting-edge technologies to create impactful solutions that drive significant business value for our clients globally. Established in 2022, we specialize in Cloud Services, Artificial Intelligence, Software Development, ERP Solutions, and IT Consulting. Our passion lies in pushing the boundaries of what's possible with technologies like AI/ML, Blockchain, Reactive Architecture, and Cloud-Native solutions. We're a dynamic, agile, and innovation-driven company committed to delivering high-quality, scalable, and secure software that truly makes a difference. With a proven track record across 175+ projects, including innovative products like ForestTwin™ for carbon tech and the ISO Platform for compliance, we are dedicated to transforming businesses and making a brighter world.

The Opportunity
We're looking for a highly skilled and experienced Tech Lead to join our dynamic engineering team. In this pivotal role, you'll be instrumental in shaping our technical vision, driving the development of next-generation reactive and microservices-based applications, and fostering a culture of technical excellence within our agile development environment. You'll be a key player in designing and implementing robust, scalable, and resilient systems. Your expertise in architectural principles will be crucial in guiding the team and ensuring the successful deployment of high-quality software. We're seeking a leader who can not only leverage strong fundamental knowledge but also expertly integrate and utilize AI tools to deliver superior software solutions in a fast-paced, agile manner. If you thrive on technical challenges, enjoy mentoring, and are excited about the impact of AI on software development, Fusionpact Technologies is the place for you.

Responsibilities

Technical Leadership & Architecture
- Lead the design, development, and deployment of complex reactive and microservices-based applications, ensuring adherence to Fusionpact's best practices, architectural principles, and quality standards.
- Define and enforce coding standards, design patterns, and architectural guidelines across development teams to ensure consistency and maintainability.
- Conduct rigorous technical reviews and provide constructive feedback to ensure high-quality code, scalable solutions, and optimal performance.
- Mentor, coach, and guide development teams on advanced architectural concepts, reactive programming paradigms (e.g., Akka), and microservices best practices.

Agile Development & AI Integration
- Drive agile development practices within your scrum team, working closely with the Scrum Master, DevOps, QA, Backend, and Frontend engineers to ensure efficient workflows and timely delivery.
- Champion the adoption and effective utilization of cutting-edge AI tools (e.g., Cursor AI, GitHub Copilot, or similar generative AI solutions) to enhance code quality, accelerate development cycles, and improve overall team efficiency.
- Proactively identify opportunities to leverage AI for tasks such as intelligent code generation, automated refactoring, advanced bug detection, and smart automated testing frameworks.
- Ensure the seamless and effective integration of AI-powered workflows into the existing development pipeline, continuously optimizing the software delivery lifecycle.
Project Management & Quality Assurance
- Effectively manage and contribute to multiple projects simultaneously, consistently delivering superior quality output in line with project timelines and client expectations.
- Take ownership of the technical success of projects, from initial conception and architectural design to successful deployment and ongoing maintenance.
- Collaborate with product owners and stakeholders to translate complex business requirements into clear, actionable technical specifications.
- Ensure the delivery of highly performant, secure, maintainable, and resilient software solutions that meet Fusionpact's high standards.

Team Collaboration & Mentorship
- Foster a collaborative, innovative, and inclusive team environment, encouraging knowledge sharing, continuous learning, and cross-functional synergy.
- Provide dedicated technical guidance, coaching, and mentorship to junior and mid-level engineers, helping them grow their skills and careers.
- Champion a culture of continuous learning, staying abreast of emerging technologies, industry trends, and innovative software development methodologies, and bringing these insights back to the team.

Required Skills & Experience
- 8+ years of progressive experience in software development, with at least 3+ years in a Tech Lead or similar leadership role focused on complex distributed systems.
- Proven hands-on experience in designing, building, and deploying highly available, scalable, and resilient reactive and microservices-based applications.
- Deep understanding of modern architecture principles, design patterns (e.g., Domain-Driven Design, Event Sourcing, CQRS), and software development best practices.
- Strong hands-on experience with at least one major programming language extensively used in reactive/microservices development (e.g., Java, Kotlin, Go, or Scala).
- Strong fundamental knowledge and practical experience leveraging AI tools (e.g., Cursor AI, GitHub Copilot, Tabnine, or similar) to enhance development workflows, improve code quality, and accelerate delivery.
- Demonstrated ability to effectively manage and contribute to multiple projects simultaneously while maintaining superior quality output.
- Extensive experience working in a fast-paced, agile (Scrum, Kanban) environment and guiding cross-functional scrum teams (Scrum Master, DevOps, QA, Backend, Frontend).
- Solid understanding of DevOps principles, CI/CD pipelines, and automated deployment strategies.
- Excellent communication, interpersonal, and leadership skills, with the ability to articulate complex technical concepts to diverse audiences.
- Strong ethics and integrity, with a proven ability to thrive and lead effectively in a remote or hybrid work environment.

Preferred Qualifications
- Hands-on experience with Scala and Akka for building reactive systems.
- Proficiency with cloud platforms such as AWS, Azure, or GCP, including experience with their relevant services for microservices deployment and management.
- In-depth experience with containerization technologies (Docker, Kubernetes) and orchestration.
- Familiarity with various data storage technologies (relational databases, NoSQL databases like Cassandra, MongoDB, Redis) and message queues (Kafka, RabbitMQ).
- Experience with performance tuning, monitoring, and troubleshooting distributed systems.
- Certifications in relevant cloud platforms or agile methodologies.
If you are a passionate and experienced Tech Lead with a strong background in reactive and microservices architectures, a knack for leveraging AI to deliver exceptional software, and a commitment to fostering a high-performing team, we encourage you to apply and become a part of Fusionpact Technologies' innovative journey! Apply Now!
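Since the posting highlights reactive programming with Scala and Akka, here is a small, illustration-only sketch of a message-driven component using Akka Typed. The actor name, message type, and values are invented for the example and are not part of the posting.

```scala
// Hedged sketch of a reactive, message-driven component with Akka Typed.
// All names and values below are illustrative assumptions.
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object PaymentProcessor {
  sealed trait Command
  final case class ProcessPayment(orderId: String, amount: BigDecimal) extends Command

  def apply(): Behavior[Command] = Behaviors.receive { (context, message) =>
    message match {
      case ProcessPayment(orderId, amount) =>
        // A real service would call downstream systems asynchronously here
        context.log.info("Processing payment of {} for order {}", amount, orderId)
        Behaviors.same
    }
  }
}

object Main extends App {
  val system: ActorSystem[PaymentProcessor.Command] =
    ActorSystem(PaymentProcessor(), "payments")
  system ! PaymentProcessor.ProcessPayment("order-42", BigDecimal(199.99))
}
```

The actor model keeps state and side effects behind asynchronous message passing, which is the core idea behind the "reactive" systems the role describes.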

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Data is the new oil! Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build big-data solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you!

Amazon's Financial Technology team is looking for passionate, results-oriented, inventive Data Engineers who can work on massively scalable and distributed systems. The candidate thrives in a fast-paced environment, understands how to deal with large sets of data and transactions, and will help us deliver on a new generation of software leveraging Amazon Web Services. The candidate is passionate about technology and wants to be involved with real business problems. Our platform serves Amazon's finance, tax, and accounting functions across the globe. We are looking for an experienced Data Engineer to join the FinTech Tax teams that build and operate technology supporting Amazon's tax compliance and audit needs worldwide. Our teams are responsible for building a big-data platform to stream billions of transactions a day and to process and organize them into the output required for tax compliance globally. With Amazon's data-driven culture, our platform will provide accurate, timely, and actionable data for our customers.

As a member of this team, your mission will be to design, develop, document, and support massively scalable, distributed, real-time systems. Using Python, Java, object-oriented design patterns, distributed databases, and other innovative storage techniques, you will build and deliver software systems that support complex and rapidly evolving business requirements. You will communicate your ideas effectively to achieve the right outcome for your team and customer. Your code, design, and implementation decisions will set a great example to other engineers. As a senior engineer, you will provide guidance and support for other engineers with industry best practices and direction. You will also have the opportunity to impact the technical decisions in the broader organisation as well as mentor other engineers in the team.

Key job responsibilities
This is an exciting opportunity for a seasoned Data Engineer to take on a pivotal role in the architecture, design, implementation, and deployment of large-scale, critical, and complex financial applications. You will push your design and architecture skills to the limit by owning all aspects of end-to-end solutions. Leveraging agile methodologies, you will iteratively build and deliver high-quality results in a fast-paced environment. With strong verbal and written communication abilities, self-motivation, and a collaborative mindset, you will work across Amazon engineering teams and business teams globally to plan, design, execute, and implement this new platform across multiple geographies. Throughout the project lifecycle, you will review requirements, design services that lay the foundation for the new technology platform, integrate with existing architectures, develop and test code (Python, Scala, Java), and deliver seamless implementations for Global Tax customers. In a hands-on role, you will manage day-to-day activities and participate in designs, design reviews, and code reviews with the engineering team. You will build solutions using AWS technologies such as EC2, RDS/DynamoDB/Redshift, S3, EMR, Glue, and QuickSight.
You will design and code technical solutions to deliver value to tax customers. Additionally, you will contribute to a suite of tools hosted on the AWS infrastructure, working with a variety of tools across the spectrum of the software development lifecycle.

About The Team
The FinTech International Tax Compliance (FIT Compliance) team oversees the Tax Data Warehouse platform, a large-scale data platform and reporting solution designed for indirect tax compliance across Amazon and other organizations. This platform enables businesses to adhere to mandatory tax regulations, drive data accuracy, and ensure audit readiness, providing consistent and reliable data to tax teams in the EMEA regions. As Amazon expands its operations in EMEA, the FIT Compliance team plays a crucial role in delivering mandatory tax reporting obligations, facilitating these launches. Furthermore, their charter encompasses building solutions to meet evolving tax legislation changes, audit requests, and technology requirements globally, such as International Recon and Digital Service Tax (DST). The team is also investing in building the next generation of strategic platforms such as the Unified Tax Ledger (UTL) and Golden Data Set (GDS).

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2883904

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities:
- Build and drive the vision for the data platform and prepare a roadmap across Freecharge businesses.
- Work with business and product teams to understand and define the scope of data platform requirements, owning those requirements as the data engineering product manager.
- Help architect and build real-time and batch processing infrastructure for customer segmentation, hyper-personalization, AI/ML use cases, etc.
- Conceptualize scalable and efficient solutions for business and technical problems.
- Demonstrate an advanced understanding of the technology stack, the relevant product metrics, and how your products interact with various businesses.
- Proactively assess customer sentiment and take appropriate measures using analytics and big data.
- Work closely with a highly technical engineering team.
- Define and monitor the KPIs for the data platform used across the organization for better customer experience and merchant sentiment.

Preferred Qualifications:
- 3-5 years as a data platform product manager, with experience building data platforms or leading data architecture revamp programs.
- Experience working with cross-functional teams spanning product, growth, marketing, data analytics, and data engineering.
- Ability to build consensus among stakeholders, including the leadership team.
- Ability to grasp technical concepts quickly and communicate them effectively to a non-technical audience.
- Strong execution capability paired with the ability to envision and articulate a clear path to get there.
- B.Tech/M.Tech in Computer Science/IT with experience in Spark, Hadoop, Scala, Java, segmentation, linear algebra, etc.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Are you interested in innovating to deliver a world-class level of service to Amazon's Selling Partners? At Amazon International Seller Services our mission is to make Sellers successful on Amazon. We are the team in Amazon which has a charter to work with sellers from every country. This provides a great opportunity to get diverse experience and a worldwide view. In this role you will be part of our mission to improve the Customer and Selling Partner experience across EU expansion marketplaces. You will partner with Business leaders, Product managers and the BI manager to help drive specific business goals and be the analytical engine in specific areas. You will help identify, track and manage Key Performance Indicators (KPIs), and partner with internal and external teams to identify root causes and automate iterative workflows using Amazon's tools. The ideal candidate should have a proven ability to independently deliver complex analytics projects. This role has high leadership visibility and requires efficient communication with tech and non-tech stakeholders. To be successful in this role, you should be comfortable dealing with large and complex data sets, have expertise in SQL querying and Excel, and have experience building self-service dashboards and using visualization tools, while always applying analytical rigor to solve business problems. You should have excellent judgment, be passionate about high standards (never satisfied with the status quo), and deliver innovative solutions.

Key job responsibilities
- Collaborate with Leaders, multiple Account Managers, Team Managers, etc. to understand business requirements and to prioritize and deliver data and reporting independently.
- Design, develop and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs.
- Analyze key metrics to uncover trends and root causes of issues and build simplified solutions for account managers/product managers to consume.
- Apply statistical and machine learning methods to specific business problems and data.
- Utilize code (Python, R, Scala, etc.) for analyzing data and building statistical models.
- Lead deep dives working backwards from business hypotheses and anecdotes, and build visualizations and automation to reduce iterative manual effort.
- Automate workflows using Amazon tools to improve sales teams' productivity.
- Collaborate with other analysts to adopt best practices.
- Continually upskill in new technologies and adopt them in day-to-day work.

Basic Qualifications
- Bachelor's degree in a quantitative field (e.g., Computer Science, Mathematics, Statistics, Finance).
- 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python) or statistical/mathematical software (e.g. R, SAS, Matlab, etc.).
- Experience working with SQL and at least one data visualization tool (e.g., PowerBI, Tableau, Amazon QuickSight) in a business environment.

Preferred Qualifications
- Proficiency in Python
- Knowledge of Java, JSON and Amazon technologies: AWS S3, RS, Lambda
- Experience with machine learning/statistical modeling, data analysis tools and techniques, and the parameters that affect their performance
- Experience as a business analyst, data analyst or similar role

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2983323

Posted 3 weeks ago

Apply

2.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

Programming Languages: Python, Scala
Machine Learning frameworks: Scikit-learn, XGBoost, TensorFlow, Keras, PyTorch, spaCy, Gensim, Stanford NLP, NLTK, OpenCV, Spark MLlib
Machine Learning algorithms: experience good to have
Scheduling: Airflow
Big Data / Streaming / Queues: Apache Spark, Apache NiFi, Apache Kafka, RabbitMQ (any one of them)
Databases: MySQL, Mongo/Redis/Dynamo DB, Hive
Source Control: Git
Cloud: AWS
Build and Deployment: Jenkins, Docker, Docker Swarm, Kubernetes
BI tool: QuickSight (preferred), else any BI tool (must have)
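As an illustration of one of the listed frameworks (Spark MLlib) in one of the listed languages (Scala), a minimal training-pipeline sketch. The input path, feature columns, and label column are assumptions made up for the example.

```scala
// Hedged Spark MLlib sketch: assemble features and fit a logistic regression.
// Schema, paths, and column names are illustrative only.
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").master("local[*]").getOrCreate()

    // Assumed training data: numeric features plus a binary "churned" label
    val training = spark.read.parquet("data/churn_features.parquet")

    val assembler = new VectorAssembler()
      .setInputCols(Array("tenure_months", "monthly_spend", "support_tickets"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("churned")
      .setFeaturesCol("features")

    // Chain feature assembly and the estimator into a single pipeline
    val model = new Pipeline().setStages(Array(assembler, lr)).fit(training)
    model.transform(training).select("churned", "prediction").show(5)

    spark.stop()
  }
}
```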

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 16 Lacs

Hyderabad

Remote

Job description
As an ETL Developer for the Data and Analytics team at Guidewire, you will participate and collaborate with our customers and SI Partners who are adopting our Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and be an active developer when necessary, to operationalize the realization of the agreed-upon ETL architecture goals of our customers, adhering to Guidewire best practices and standards. You will work with our customers, partners, and other Guidewire team members to deliver successful data transformation initiatives. You will utilize best practices for design, development, and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization. Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated, take proactive action for the benefit of our customers, and ensure that they succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality, and Rationality.

Key Responsibilities:
- Build out technical processes from specifications provided in High Level Design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with Lead Architects and provide inputs into defining user stories, scope, acceptance criteria and estimates.
- Systematic problem-solving approach, coupled with a sense of ownership and drive.
- Ability to work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.

Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, and 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP, CDC, and ETL data integration.
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.

Nice to have:
- Insurance industry experience
- Experience with ADF or AWS Glue
- Experience with Azure Data Factory, Spark/Scala
- Experience with the Guidewire Data Platform

Posted 3 weeks ago

Apply

5.0 years

35 - 45 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 3500000 - Rs 4500000 (i.e., INR 35-45 LPA)
Min Experience: 5 years
Location: Bangalore
JobType: full-time

Requirements

Roles & Responsibilities
- Develop and extend our backend platform, processing terabytes of data to deliver unique, personalized financial experiences
- Collaborate directly with tech-focused founding team members and IIT graduates with expertise in designing scalable and robust system architectures
- Design systems from scratch with scalability and security front of mind
- Demonstrate deep knowledge of design patterns in Java, data structures, and algorithms
- Monitor and optimize MySQL database queries for peak performance
- Experience with tools like Scala, Kafka, Bigtable, and BigQuery is beneficial but not mandatory
- Mentor junior team members by providing regular feedback and conducting code reviews

Posted 3 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday

Experience:
- AWS Glue Catalog: 3 years (Required)
- Data Engineering: 6 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)

Work Location: In person

Posted 3 weeks ago

Apply

3.0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Analyze business requirements and functional specifications
- Be able to determine the impact of changes on current system functionality
- Interact with diverse Business Partners and Technical Workgroups
- Be flexible to collaborate with onshore business during US business hours
- Be flexible to support project releases during US business hours
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 3+ years of working experience in Python, PySpark, Scala
- 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra, etc.
- Hands-on working experience in Azure Databricks
- Solid healthcare domain knowledge
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Exposure to Agile methodology, specifically using tools like Rally
- Ability to understand the existing application codebase, perform impact analysis and update the code when required based on the business logic or for optimization
- Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
- Experience in Streaming applications (Kafka, Spark Streaming, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

#Gen #NJP

Posted 3 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with Hadoop ecosystem, Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

- Collaborate with Product Owners and stakeholders to understand the business requirements.
- Good experience in Apache Kafka, Python, Tableau and MSBI (SSIS, SSRS).
- Kafka integration with Python and data loading processes.
- Analyse data from key source systems and design suitable solutions that transform the data from source to target.
- Provide support to the stakeholders and scrum team throughout the development lifecycle and respond to any design queries.
- Support testing and implementation, and review solutions to ensure functional and data assurance requirements are met.
- Passionate about data and delivering high-quality, data-led solutions.
- Able to influence stakeholders, build strong business relationships and communicate in a clear, concise manner.
- Experience working with SQL or any big data technologies is a plus (Hadoop, Hive, HBase, Scala, Spark, etc.).
- Good with Control-M, Git, and CI/CD pipelines.
- Good team player with a strong team ethos.

Skills Required: MSBI SSIS, MS SQL Server, Kafka, Airflow, ANSI SQL, Shell Script, Python, Scala, HDFS.
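The posting asks for Kafka integration and a data-loading process (with Python); purely as an illustration of that pattern on this Scala-focused page, here is a small sketch using the standard Kafka client from Scala. The broker address, topic name, and payloads are invented placeholders.

```scala
// Hedged sketch: publish extracted records to a Kafka topic for downstream loading.
// Broker, topic, and payloads are illustrative assumptions.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object LoadToKafka {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    try {
      // Each record stands in for a row extracted from a source system
      Seq("""{"id":1,"status":"NEW"}""", """{"id":2,"status":"NEW"}""").foreach { payload =>
        producer.send(new ProducerRecord[String, String]("orders-topic", payload))
      }
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```

A consumer (or a sink connector) on the other side of the topic would then load the messages into the target store, which is the "data loading process" half of the integration.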

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Chennai

Work from Office

Job Summary
Synechron is seeking an experienced Data Processing Engineer to lead the development of large-scale data processing solutions using Java, Apache Flink/Storm/Beam, and Google Cloud Platform (GCP). In this role, you will collaborate across teams to design, develop, and optimize data-intensive applications that support strategic business objectives. Your expertise will help evolve our data architecture, improve processing efficiency, and ensure the delivery of reliable, scalable solutions in an Agile environment.

Software Requirements
Required:
- Java (version 8 or higher)
- Apache Flink, Storm, or Beam for streaming data processing
- Google Cloud Platform (GCP) services, especially BigQuery and related data tools
- Experience with databases such as BigQuery, Oracle, or equivalent
- Familiarity with version control tools such as Git
Preferred:
- Cloud deployment experience, with GCP in particular
- Additional familiarity with containerization (Docker/Kubernetes)
- Knowledge of CI/CD pipelines and DevOps practices

Overall Responsibilities
- Collaborate closely with cross-functional teams to understand data and system requirements, then design scalable solutions aligned with business needs.
- Develop detailed technical specifications, implementation plans, and documentation for new features and enhancements.
- Implement, test, and deploy data processing applications using Java and Apache Flink/Storm/Beam within GCP environments.
- Conduct code reviews to ensure quality, security, and maintainability, supporting team members' growth and best practices.
- Troubleshoot technical issues, resolve bottlenecks, and optimize application performance and resource utilization.
- Stay current with advancements in data processing, cloud technology, and Java development to continuously improve solutions.
- Support testing teams to verify data workflows and validation processes, ensuring reliability and accuracy.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure continuous delivery and process improvement.

Technical Skills (By Category)
Programming Languages:
- Required: Java (8+)
- Preferred: Python, Scala, or Node.js for scripting or auxiliary processing
Databases/Data Management:
- Experience with BigQuery, Oracle, or similar relational data stores
Cloud Technologies:
- GCP (BigQuery, Cloud Storage, Dataflow, etc.) with hands-on experience in cloud data solutions
Frameworks and Libraries:
- Apache Flink, Storm, or Beam for stream processing
- Java SDKs, APIs, and data integration libraries
Development Tools and Methodologies:
- Git, Jenkins, JIRA, and Agile/Scrum practices
- Familiarity with containerization (Docker, Kubernetes) is a plus
Security and Compliance:
- Understanding of data security principles in cloud environments

Experience Requirements
- 4+ years of experience in software development, with a focus on data processing and Java-based backend development
- Proven experience working with Apache Flink, Storm, or Beam in production environments
- Strong background in managing large data workflows and pipeline optimization
- Experience with GCP data services and cloud-native development
- Demonstrated success in Agile projects, including collaboration with cross-functional teams
- Previous leadership or mentorship experience is a plus

Day-to-Day Activities
- Design, develop, and deploy scalable data processing applications in Java using Flink/Storm/Beam on GCP
- Collaborate with data engineers, analysts, and architects to translate business needs into technical solutions
- Conduct code reviews, optimize data pipelines, and troubleshoot system issues swiftly
- Document technical specifications, data schemas, and process workflows
- Participate actively in Agile ceremonies, provide updates on task progress, and suggest process improvements
- Support continuous integration and deployment of data applications
- Mentor junior team members, sharing best practices and technical insights

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent
- Relevant certifications in cloud technologies or data processing (preferred)
- Evidence of continuous professional development and staying current with industry trends

Professional Competencies
- Strong analytical and problem-solving skills focused on data processing challenges
- Leadership abilities to guide, mentor, and develop team members
- Excellent communication skills for technical documentation and stakeholder engagement
- Adaptability to rapidly changing technologies and project priorities
- Capacity to prioritize tasks and manage time efficiently under tight deadlines
- Innovative mindset to leverage new tools and techniques for performance improvements
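To illustrate the kind of stream processing this role centres on, a minimal word-count sketch using Flink's bundled Scala DataStream API (the posting itself targets Java; this Scala equivalent is for illustration only, and the socket source and port are placeholder assumptions; in practice the source would usually be Kafka or Pub/Sub).

```scala
// Hedged sketch: a tiny Flink streaming job in the Scala DataStream API.
// The socket source is a stand-in for a real streaming source.
import org.apache.flink.streaming.api.scala._

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Placeholder source: text lines arriving over a socket
    val lines = env.socketTextStream("localhost", 9999)

    val counts = lines
      .flatMap(_.toLowerCase.split("\\W+")) // tokenize each line
      .filter(_.nonEmpty)
      .map((_, 1))                          // pair each word with a count of 1
      .keyBy(_._1)                          // partition the stream by word
      .sum(1)                               // running count per word

    counts.print()
    env.execute("streaming-word-count")
  }
}
```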

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru, Karnataka

Work from Office

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship and the data quality platform. If you've got the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you. Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies.
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.

BASIC QUALIFICATIONS for Data Engineer
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

PREFERRED QUALIFICATIONS
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficient in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

For Managers:
- Customer centricity and an obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
- Ability to structure and organize teams, and streamline communication
- Prior work experience executing large-scale data engineering projects

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

- Experience in SQL and understanding of ETL best practices
- Good hands-on experience in ETL/Big Data development
- Extensive hands-on experience in Scala
- Experience with Spark/YARN, troubleshooting Spark, Linux, and Python
- Setting up a Hadoop cluster; backup, recovery, and maintenance
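Since the role combines Scala with Spark-on-YARN troubleshooting, here is an illustration-only sketch of a SparkSession configured for YARN with a few of the knobs that commonly come up when diagnosing such jobs. The specific values are placeholders, not recommendations, and the YARN master assumes HADOOP_CONF_DIR/YARN_CONF_DIR are set.

```scala
// Hedged sketch: a Spark application targeting YARN with common tuning settings.
// Values are illustrative; real settings depend on cluster size and workload.
import org.apache.spark.sql.SparkSession

object YarnJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("yarn-example")
      .master("yarn") // usually supplied via spark-submit instead
      .config("spark.executor.instances", "4")
      .config("spark.executor.memory", "4g")
      .config("spark.executor.cores", "2")
      .config("spark.sql.shuffle.partitions", "200")
      .getOrCreate()

    // A trivial action to confirm executors register and the job completes
    println(spark.range(0, 1000000L).count())
    spark.stop()
  }
}
```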

Posted 3 weeks ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.

Responsibilities
- Develop and maintain data pipelines implementing ETL processes.
- Take responsibility for Hadoop development and implementation.
- Work closely with a data science team implementing data analytic pipelines.
- Help define data governance policies and support data versioning processes.
- Maintain security and data privacy, working closely with the Data Protection Officer internally.
- Analyse a vast number of data stores and uncover insights.

Skillset Required
- Ability to design, build and unit test applications in PySpark.
- Experience with Python development and Python data transformations.
- Experience with SQL scripting on one or more platforms (Hive, Oracle, PostgreSQL, MySQL, etc.).
- In-depth knowledge of Hadoop, Spark, and similar frameworks.
- Strong knowledge of data management principles.
- Experience with normalizing/de-normalizing data structures, and developing tabular, dimensional and other data models.
- Knowledge of YARN, clusters, executors, and cluster configuration.
- Hands-on work with different file formats like JSON, Parquet, CSV, etc.
- Experience with the CLI on Linux-based platforms.
- Experience analysing current ETL/ELT processes, and defining and designing new processes.
- Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights.
- Good to have knowledge of data visualization.
- Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.

Posted 3 weeks ago

Apply

0 years

40 - 45 Lacs

Bengaluru, Karnataka, India

On-site

About Us
We are on a mission to create India's largest fully automated financial inclusion organization, offering a range of financial services including micro-loans to serve the vast underserved middle/lower-income segment. Recognized as one of the Top 10 Google Launchpad-backed AI/ML Tech startups, you will experience firsthand challenges and opportunities to contribute towards building and scaling our business. Collaborate with brilliant minds driven by the goal of solving macro issues related to financial inclusion. Our services span over 17,000 pin codes in India, having positively impacted over 5.5 million users. Our user profile ranges from micro-entrepreneurs and small retailers to blue-grey-collar workers and salaried employees across various sectors. As part of our team, you'll manage petabytes of data and contribute to organizational growth by deriving and applying data-driven insights, alongside opportunities to innovate and patent AI/ML technologies.

What Can You Expect?
- Ownership of the company's success through ESOPs for high performers.
- Market-leading competitive salaries (in the 90th percentile).
- An open culture that encourages expressing opinions freely.
- Opportunities to learn from industry experts.
- A chance to positively impact billions of lives by enhancing financial inclusion.
Be part of our journey to re-imagine solutions, delivering world-class, best-of-breed services to delight our customers and make a significant impact on the FinTech industry.

Roles & Responsibilities
- Develop and extend our backend platform, processing terabytes of data to deliver unique, personalized financial experiences.
- Collaborate directly with tech-focused founding team members and IIT graduates with expertise in designing scalable and robust system architectures.
- Design systems from scratch with scalability and security front of mind.
- Demonstrate deep knowledge of design patterns in Java, data structures, and algorithms.
- Monitor and optimize MySQL database queries for peak performance.
- Experience with tools like Scala, Kafka, Bigtable, and BigQuery is beneficial but not mandatory.
- Mentor junior team members by providing regular feedback and conducting code reviews.

Skills: data structures, MySQL, BigQuery, Bigtable, code, HLD, DS, design patterns, Kafka, Scala, fintech, algorithms, Java, architecture

Posted 3 weeks ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Overview
We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.

Responsibilities
- Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance.
- Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs.
- Collaborate with team members to gather requirements and document technical solutions.
- Perform unit testing, data quality checks, and process monitoring activities.
- Contribute to the creation of stored procedures, functions, and views.
- Support troubleshooting of pipeline errors and validation issues.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related discipline.
- 3+ years of experience in data engineering or internships in data/analytics teams.
- Working knowledge of Python, SQL, and ideally PySpark.
- Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP).
- Strong problem-solving skills and eagerness to learn distributed data processing.
- Good verbal and written communication skills.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
- We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies
MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams
We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 3 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurugram

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
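As a small illustration of the kind of pipeline optimization such a role involves, a Spark/Scala sketch (dataset, columns, and paths are hypothetical) that partitions output by date so date-bounded queries prune files instead of scanning everything:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventsRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-events-rollup").getOrCreate()

    // Source table and column names are hypothetical.
    val events = spark.read.parquet("/mnt/raw/events/")

    // Aggregate events per customer per day.
    val daily = events
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy("customer_id", "event_date")
      .agg(count(lit(1)).as("event_count"))

    // Repartition on the write key and partition the output by date
    // so downstream jobs read only the partitions they need.
    daily
      .repartition(col("event_date"))
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/mnt/curated/daily_events/")

    spark.stop()
  }
}
```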

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 11 Lacs

Bengaluru

Work from Office

- Work with product management and the dev team to design, develop, and deliver features and enhancements.
- Collaborate closely with peers to develop clean code: readable, testable, high quality, performant, and secure.
- Develop code using pair and team programming approaches.
- Perform peer code reviews and walk-throughs.
- Automate testing and deployment of software to enable delivering improvements to customers on a regular cadence.
- Work closely with the agile team to innovate and improve everything.

Minimum Requirements:
- B.S. in Computer Science or equivalent is preferred
- 4+ years of experience with modern languages such as Java/C#/JavaScript/Scala
- Recent 2+ years of Scala functional programming in an enterprise SW environment
- Experience with RESTful applications
- Experience with microservices
- Ability to work effectively in a distributed, collaborative, agile environment and deliver solutions on a regular cadence
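As a brief illustration of the readable, testable functional Scala style such roles ask for, a sketch (the order domain and rules are hypothetical) that keeps validation pure and returns Either instead of throwing:

```scala
object OrderValidation {
  final case class Order(id: String, quantity: Int, unitPrice: BigDecimal)

  // Pure validation: no exceptions, no side effects, easy to unit test.
  def validate(order: Order): Either[String, Order] =
    if (order.quantity <= 0) Left(s"Order ${order.id}: quantity must be positive")
    else if (order.unitPrice < 0) Left(s"Order ${order.id}: unit price cannot be negative")
    else Right(order)

  // Total the valid orders; report the invalid ones separately.
  def totalValid(orders: List[Order]): (List[String], BigDecimal) = {
    val (errors, valid) = orders.map(validate).partitionMap(identity)
    (errors, valid.map(o => o.unitPrice * o.quantity).sum)
  }

  def main(args: Array[String]): Unit = {
    val orders = List(Order("a-1", 2, BigDecimal(10)), Order("a-2", 0, BigDecimal(5)))
    println(totalValid(orders)) // one error for a-2, total 20 for a-1
  }
}
```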

Posted 3 weeks ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Job Title: Solution Architect
Experience: 10-18 Years
Location: Bangalore (WFO)

Job Description:
- Engage with the customer architecture team to understand the customer product ecosystem.
- Provide solution architecture/technical design to achieve project goals.
- Expertise in architecture patterns and microservices architecture.

Key technical skills:
- Front-end UI: ELM
- Back-end API: Scala Play2 framework, Typelevel ecosystem (Fs2, Cats Effect, Http4s)
- Database: MongoDB
- Messaging: Kafka + Hermes
- Security/IdP: Okta
- Cloud/Deployment: AWS (primarily EC2), Kubernetes, Docker
- DevOps/CI-CD: JIRA, GitHub, GitHub Actions, Jenkins, ArgoCD

- Ensures alignment of individual software projects with overall business strategy and technology standards.
- Collaborates closely with development teams to oversee solution implementation.
- Has a passion for software development, likes to solve complex problems, and has a strong work ethic.
- Contributes innovative ideas, technologies, and patterns.
- Stays updated with industry trends regarding best practices in front-end and back-end technologies.

Minimum Requirements:
- 6+ years of experience in web development
- Experience across the SDLC: designing, developing, testing, implementing, deploying, and maintaining software applications.
- Experience with a JS framework with state management (React, Angular, Vue, etc.) or the ELM language.
- Proficiency in modern web techniques and standards: HTML, CSS, JavaScript, and design principles.
- Experience with Java/Scala.
- Experience with RESTful applications.
- Recent 3+ years of Scala functional programming in an enterprise SW environment.
- Experience with frameworks such as Play or Akka is a plus.
- Experience with GraphQL is a plus.
- Proficiency in database management systems (SQL & NoSQL) such as MongoDB.
- Experience with microservices architecture.
- Experience with source control (Git), creating pull requests, and feature branching strategies.
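For context on the Typelevel stack named above, a minimal Http4s/Cats Effect sketch (the route, port, and use of the Ember server on http4s 0.23 are illustrative assumptions, not part of the posting):

```scala
import cats.effect.{IO, IOApp}
import com.comcast.ip4s._
import org.http4s.HttpRoutes
import org.http4s.dsl.io._
import org.http4s.ember.server.EmberServerBuilder
import org.http4s.implicits._

object HealthCheckServer extends IOApp.Simple {

  // A single route returning a plain-text health status.
  private val routes = HttpRoutes.of[IO] {
    case GET -> Root / "health" => Ok("ok")
  }

  // Build and run the HTTP server as a managed resource.
  def run: IO[Unit] =
    EmberServerBuilder
      .default[IO]
      .withHost(ipv4"0.0.0.0")
      .withPort(port"8080")
      .withHttpApp(routes.orNotFound)
      .build
      .useForever
}
```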

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad

Hybrid

Job Title: Big Data Engineer
Experience: 5+ Years
Location: Hyderabad (Hybrid)
Employment Type: Full-Time

Job Summary: We are seeking a skilled Big Data Engineer with 5+ years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience working with any cloud platform (AWS, Azure, or GCP) is preferred.

Required Skills:
- 5+ years of hands-on experience as a Big Data Engineer.
- Strong proficiency in Apache Spark (PySpark or Scala).
- Solid understanding of and experience with SQL and database optimization.
- Experience with data lake or data warehouse environments and architecture patterns.
- Good understanding of data modeling, performance tuning, and partitioning strategies.
- Experience working with large-scale distributed systems and batch/stream data processing.

Preferred Qualifications:
- Experience with cloud platforms such as AWS, Azure, or GCP.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
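To illustrate the stream-processing side of such a role, a minimal Spark Structured Streaming sketch in Scala (the broker address, topic name, and output paths are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object ClickstreamStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-stream").getOrCreate()

    // Read a Kafka topic as a streaming DataFrame; the value column arrives as bytes.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clickstream")
      .load()

    // Keep the payload as a string plus the Kafka timestamp for downstream handling.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Append micro-batches to a Parquet sink with checkpointing for exactly-once recovery.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/mnt/curated/clickstream/")
      .option("checkpointLocation", "/mnt/checkpoints/clickstream/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```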

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
- Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience.
- 6 years of experience as a technical sales engineer in a cloud computing environment or a customer-facing role.
- Experience with Apache Spark and analytic warehouse solutions (e.g., Teradata, Netezza, Vertica, SQL Server, and Big Data technologies).
- Experience implementing analytics systems architecture.

Preferred qualifications:
- Master's degree in Computer Science or a related technical field.
- Experience with technical sales or professional consulting in cloud computing, data, information life-cycle management, and Big Data.
- Experience in data warehousing, data lakes, batch/real-time processing, and Extract, Transform, and Load (ETL) workflows, including architecture design, implementation, tuning, and schema design.
- Experience with coding languages like Python, JavaScript, C++, Scala, R, or Go.
- Knowledge of Linux, Web 2.0 development platforms, solutions, and related technologies such as HTTP, Basic/NTLM, sessions, XML/XSLT/XHTML/HTML.
- Understanding of DNS, TCP, firewalls, proxy servers, DMZ, load balancing, VPN, and VPC.

About The Job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Support local sales teams in pursuing business opportunities by engaging customers to address data life-cycle aspects.
- Collaborate with business teams to identify business and technical requirements, conduct full technical discovery, and architect client solutions.
- Lead technical projects, including technology advocacy, bid response support, product briefings, proof-of-concept work, and coordinating technical resources.
- Leverage Google Cloud Platform products to demonstrate and prototype integrations in customer/partner environments. Travel for meetings, technical reviews, and on-site delivery activities as needed.
- Deliver compelling product messaging to highlight the Google Cloud Platform value proposition through whiteboard and slide presentations, product demonstrations, white papers, and Request For Information (RFI) response documents.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

- 5-10 years of quality engineering experience and a strong software automation testing mindset.
- Strong programming skills in Java, Cypress/Python.
- Proficiency with automation testing tools and/or other comparable technologies.
- Experience creating UI automation frameworks and test suites using Selenium, Cucumber, the Page Object design pattern, Page Factory, TestNG, etc.
- Experience creating API/backend automation frameworks and test suites using Rest Assured, JUnit, Mockito, Spring Boot, Cucumber, Kafka/MQ, WireMock, Postgres.
- Cloud and CI/CD experience: knowledge of AWS, Azure, Docker & Kubernetes, Jenkins.
- Strong experience performing functional testing in different phases of the software development life cycle and in CI/CD projects.
- Proficient in creating and maintaining test beds and test docs.
- Knowledge of software development, software design, and overall system architecture.
- Experience leading a small team and exhibiting leadership qualities.
- Experience testing scalable web services (REST or SOAP) and APIs in a microservices architecture.
- Experience in scripting languages such as Perl/Python/JavaScript/Ruby/shell scripting.
- Experience in performance testing using Scala/JMeter.
- Experience in JavaScript-based automation frameworks, for example Cypress.
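Scala-based performance testing of the kind mentioned above is often done with Gatling; a minimal simulation sketch follows (the choice of Gatling, the target endpoint, and the load profile are illustrative assumptions, not taken from the posting):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class HealthCheckSimulation extends Simulation {

  // Target service is hypothetical.
  private val httpProtocol = http.baseUrl("https://api.example.com")

  // One scenario: hit the health endpoint and assert a 200 response.
  private val scn = scenario("Health check under load")
    .exec(
      http("GET /health")
        .get("/health")
        .check(status.is(200))
    )

  // Ramp 50 virtual users over 30 seconds.
  setUp(
    scn.inject(rampUsers(50).during(30.seconds))
  ).protocols(httpProtocol)
}
```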

Posted 3 weeks ago

Apply