
2810 Scala Jobs - Page 34

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Description

Who we are: At Kenvue, part of the Johnson & Johnson Family of Companies, we believe there is extraordinary power in everyday care. Built on over a century of heritage and propelled forward by science, our iconic brands, including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID®, are category leaders trusted by millions of consumers who use our products to improve their daily lives. Our employees share a digital-first mindset, an approach to innovation grounded in deep human insights, and a commitment to continually earning a place for our products in consumers' hearts and homes.

What you will do: The Senior Engineer, Kubernetes is a hands-on engineer responsible for designing, implementing, and managing a cloud-native, Kubernetes-based platform ecosystem and solutions for the organization. This includes developing and implementing containerization strategies and developer workflows, designing and deploying the Kubernetes platform, and ensuring high availability and scalability of Kubernetes infrastructure aligned with modern GitOps practices.

Key Responsibilities:
- Implement platform capabilities and the containerization plan using Kubernetes, Docker, service mesh, and other modern containerization tools and technologies.
- Collaborate with other engineering stakeholders to develop architecture patterns and templates for the application runtime platform, such as K8s cluster topology, traffic shaping, API, CI/CD, and observability, aligned with DevSecOps principles.
- Automate Kubernetes infrastructure deployment and management using tools such as Terraform, Jenkins, and Crossplane to develop self-service platform workflows.
- Serve as a member of the microservices platform team, working closely with the Security and Compliance organization to define controls.
- Develop self-service platform capabilities focused on developer workflows such as API, service mesh, external DNS, certificate management, and K8s lifecycle management in general.
- Participate in a cross-functional IT Architecture group that reviews designs from an enterprise cloud platform perspective.
- Optimize Kubernetes platform infrastructure for high availability and scalability.

What we are looking for:

Qualifications:
- Bachelor's Degree required, preferably in a STEM field.
- 5+ years of progressive experience in a combination of development and design in areas of cloud computing.
- 3+ years of experience developing cloud-native platform capabilities based on Kubernetes (EKS and/or AKS preferred).
- Strong Infrastructure as Code (IaC) experience on public cloud (AWS and/or Azure).
- Experience working on large-scale, highly available, cloud-native, multi-tenant infrastructure platforms on public cloud, preferably in a consumer business.
- Expertise in building platforms using tools like Kubernetes, Istio, OpenShift, Linux, Helm, Terraform, and CI/CD.
- Experience working on high-scale, critically important products running across public clouds (AWS, Azure) and private data centers is a plus.
- Strong hands-on development experience with one or more of the following languages: Go, Scala, Java, Ruby, Python.
- Prior experience on a team involved in re-architecting and migrating monolithic applications to microservices is a plus.
- Prior experience with observability tools such as Prometheus, Elasticsearch, Grafana, Datadog, or Zipkin is a plus.
- Solid understanding of continuous development and deployment in AWS and/or Azure.
- Understanding of Linux kernel basics and Windows Server operating systems; experience working with bash and PowerShell scripting.
- Must be results-driven, a quick learner, and a self-starter; cloud engineering experience is a plus.

Primary Location: Asia Pacific-India-Karnataka-Bangalore
Job Function: Operations (IT)
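To make the "K8s lifecycle management" duties above concrete, here is a minimal, hedged Scala sketch (not part of the posting) that uses the fabric8 Kubernetes client to report deployment replica health; the namespace name is a placeholder assumption.

```scala
import io.fabric8.kubernetes.client.KubernetesClientBuilder
import scala.jdk.CollectionConverters._

object DeploymentHealthCheck {
  def main(args: Array[String]): Unit = {
    // Builds a client from the local kubeconfig or in-cluster service account.
    val client = new KubernetesClientBuilder().build()
    try {
      val deployments = client.apps().deployments()
        .inNamespace("platform") // hypothetical namespace
        .list().getItems.asScala

      deployments.foreach { d =>
        // Status fields can be null while a rollout is in progress.
        val desired = Option(d.getSpec.getReplicas).map(_.toInt).getOrElse(0)
        val ready   = Option(d.getStatus.getReadyReplicas).map(_.toInt).getOrElse(0)
        val state   = if (ready >= desired) "OK" else "DEGRADED"
        println(s"${d.getMetadata.getName}: $ready/$desired ready [$state]")
      }
    } finally client.close()
  }
}
```

In practice a check like this would run as a CronJob or feed a Prometheus exporter rather than print to stdout; it is shown here only to illustrate the kind of lifecycle automation the role describes.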

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Description

The Data Engineer supports, develops, and maintains a data and analytics platform to efficiently process, store, and make data available to analysts and other consumers. This role collaborates with Business and IT teams to understand requirements and best leverage technologies for agile data delivery at scale.

Note: Although the role is categorized as Remote, it follows a hybrid work model.

Key Responsibilities:
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Develop and operate large-scale data storage and processing solutions using cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, DynamoDB).
- Ensure data quality and integrity through continuous monitoring and troubleshooting.
- Implement data governance processes, managing metadata, access, and data retention.
- Develop scalable, efficient, quality data pipelines with monitoring and alert mechanisms.
- Design and implement physical data models and storage architectures based on best practices.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Utilize agile development practices such as DevOps, Scrum, and Kanban for continuous improvement in data-driven applications.

Qualifications, Skills, and Experience:

Must-Have:
- 2-3 years of experience in data engineering with expertise in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Strong understanding of Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Experience in unit testing of ETL pipelines.
- Expertise in creating ETL pipelines that integrate ML models.
- Knowledge of Big Data storage strategies (optimization and performance).
- Strong problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse.
- Exposure to Agile software development methodologies.
- Quick learner with adaptability to new technologies.

Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to Big Data open-source technologies.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in cloud environments.
- Experience in building analytical solutions.
- Exposure to IoT technology.

Competencies:
- System Requirements Engineering: Translates stakeholder needs into verifiable requirements.
- Collaborates: Builds partnerships and works collaboratively with others.
- Communicates Effectively: Develops and delivers clear communications for various audiences.
- Customer Focus: Builds strong customer relationships and delivers customer-centric solutions.
- Decision Quality: Makes timely and informed decisions to drive progress.
- Data Extraction: Performs ETL activities from various sources using appropriate tools and technologies.
- Programming: Writes and tests computer code using industry standards, tools, and automation.
- Quality Assurance Metrics: Applies measurement science to assess solution effectiveness.
- Solution Documentation: Documents and communicates solutions to enable knowledge transfer.
- Solution Validation Testing: Ensures configuration changes meet design and customer requirements.
- Data Quality: Identifies and corrects data flaws to support governance and decision-making.
- Problem Solving: Uses systematic analysis to identify and resolve issues effectively.
- Values Differences: Recognizes and values diverse perspectives and cultures.

Education, Licenses, and Certifications: College, university, or equivalent degree in a relevant technical discipline, or equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Work Schedule: Work primarily with stakeholders in the US, requiring a 2-3 hour overlap during EST hours as needed.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2411641
Relocation Package: No
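As a hedged illustration of the streaming-pipeline-with-monitoring work this posting describes, here is a minimal Scala Structured Streaming sketch; the input path, schema, checkpoint location, and quality rule are illustrative assumptions, not details from the job ad.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventIngestion {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-ingestion")
      .getOrCreate()

    // Stream raw JSON from a hypothetical landing zone with an explicit schema.
    val raw = spark.readStream
      .format("json")
      .schema("device_id STRING, ts TIMESTAMP, reading DOUBLE")
      .load("/mnt/landing/events")

    // Basic data-quality gate: drop malformed rows, flag out-of-range readings.
    val cleaned = raw
      .filter(col("device_id").isNotNull && col("ts").isNotNull)
      .withColumn("suspect", col("reading") < 0 || col("reading") > 10000)

    cleaned.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/chk/events") // enables fault-tolerant recovery
      .outputMode("append")
      .start("/mnt/bronze/events")
      .awaitTermination()
  }
}
```

The "suspect" flag is one simple way to surface quality issues downstream; a production job would typically also publish row counts and rejection rates to an alerting system.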

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Remote

Source: Naukri

Databricks Administrator (Azure/AWS) | Remote | 6+ Years

Job Description: We are seeking an experienced Databricks Administrator with 6+ years of expertise in managing and optimizing Databricks environments. The ideal candidate should have hands-on experience with Azure/AWS Databricks, cluster management, security configurations, and performance optimization. This role requires close collaboration with data engineering and analytics teams to ensure smooth operations and scalability.

Key Responsibilities:
- Deploy, configure, and manage Databricks workspaces, clusters, and jobs.
- Monitor and optimize Databricks performance, auto-scaling, and cost management.
- Implement security best practices, including role-based access control (RBAC) and encryption.
- Manage Databricks integration with cloud storage (Azure Data Lake, S3, etc.) and other data services.
- Automate infrastructure provisioning and management using Terraform, ARM templates, or CloudFormation.
- Troubleshoot Databricks runtime issues, job failures, and performance bottlenecks.
- Support CI/CD pipelines for Databricks workloads and notebooks.
- Collaborate with data engineering teams to enhance ETL pipelines and data processing workflows.
- Ensure compliance with data governance policies and regulatory requirements.
- Maintain and upgrade Databricks versions and libraries as needed.

Required Skills & Qualifications:
- 6+ years of experience as a Databricks Administrator or in a similar role.
- Strong knowledge of Azure/AWS Databricks and cloud computing platforms.
- Hands-on experience with Databricks clusters, notebooks, libraries, and job scheduling.
- Expertise in Spark optimization, data caching, and performance tuning.
- Proficiency in Python, Scala, or SQL for data processing.
- Experience with Terraform, ARM templates, or CloudFormation for infrastructure automation.
- Familiarity with Git, DevOps, and CI/CD pipelines.
- Strong problem-solving skills and ability to troubleshoot Databricks-related issues.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Associate/Professional).
- Experience with Delta Lake, Unity Catalog, and MLflow.
- Knowledge of Kubernetes, Docker, and containerized workloads.
- Experience with big data ecosystems (Hadoop, Apache Airflow, Kafka, etc.).

Email: Hrushikesh.akkala@numerictech.com
Phone/WhatsApp: 9700111702
For immediate response and further opportunities, connect with me on LinkedIn: https://www.linkedin.com/in/hrushikesh-a-74a32126a/
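For a concrete feel of the routine performance work a Databricks admin schedules, here is a hedged Scala sketch of Delta table maintenance; the table name and retention window are assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession

object DeltaMaintenance {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-maintenance").getOrCreate()

    // Compact small files and co-locate rows frequently filtered by event_date
    // (hypothetical table; ZORDER column depends on the dominant query pattern).
    spark.sql("OPTIMIZE analytics.events ZORDER BY (event_date)")

    // Remove data files no longer referenced by the transaction log.
    // 168 hours simply makes the default 7-day retention explicit.
    spark.sql("VACUUM analytics.events RETAIN 168 HOURS")
  }
}
```

Jobs like this are usually registered as scheduled Databricks jobs per workspace, which is exactly the cluster/job administration this listing centers on.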

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Location: Gurugram, India (On-site/Hybrid)
Type: Full-Time | 4+ Years Experience | AI, Architecture & Product Engineering

Hubnex Labs is seeking a visionary and hands-on Full Stack Data Science Architect to lead the development of scalable AI products and reusable intellectual property (IP) that power data-driven solutions across global enterprise clients. This role requires deep technical expertise in AI/ML, data architecture, backend/frontend systems, and cloud-native technologies.

Key Responsibilities:

AI & Data Science Leadership:
- Lead design and development of end-to-end AI/ML solutions across enterprise applications.
- Architect data pipelines, model training, validation, and deployment workflows.
- Apply cutting-edge techniques in NLP, Computer Vision, Speech Recognition, Reinforcement Learning, etc.
- Evaluate and rank algorithms based on business impact, accuracy, and scalability.
- Design and optimize data augmentation, preprocessing, and feature engineering pipelines.
- Train, validate, and fine-tune models using state-of-the-art tools and strategies.
- Monitor and improve model performance post-deployment.

Full Stack & Cloud Architecture:
- Design and implement cloud-native systems using microservices, serverless, and event-driven architectures.
- Build robust APIs and UIs for intelligent applications (using Python, Node.js, React, etc.).
- Use Docker, Kubernetes, and CI/CD pipelines for scalable deployment.
- Leverage technologies like Kafka, TensorFlow, Elixir, Golang, and NoSQL/graph DBs for high-performance ML products.
- Define infrastructure to meet latency and throughput goals for ML systems in production.

Innovation & Productization:
- Build reusable IP that can be adapted across industries and clients.
- Rapidly prototype AI features and user-facing applications for demos and validation.
- Collaborate closely with product managers and business stakeholders to translate use cases into scalable tech.
- Explore and adopt new technologies and frameworks to maintain a forward-looking tech stack.

Required Skills & Experience:
- 4+ years of experience building and deploying AI/ML models and scalable software systems.
- Strong understanding of ML frameworks (TensorFlow, Keras, PyTorch), data libraries (pandas, NumPy), and model tuning.
- Proven track record of working with large-scale data, data cleaning, and visualization.
- Expertise in Python, and experience with at least one other language (Go, Java, Scala, etc.).
- Experience with front-end frameworks (React, Vue, or Angular) is a plus.
- Proficiency in DevOps practices, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Familiarity with event-driven systems, real-time protocols (WebSockets, MQTT), and container orchestration.
- Hands-on experience with NoSQL databases, data lakes, or distributed data platforms.

Preferred Traits:
- Experience leading agile engineering teams and mentoring junior developers.
- Strong architectural thinking, with an eye on scalability, maintainability, and performance.
- Entrepreneurial mindset with a focus on building reusable components and IP.
- Excellent communication skills, capable of bridging business and technical conversations.

Why Join Hubnex Labs?
- Own and architect impactful AI products used across industries.
- Shape the data science foundation of a fast-scaling software consulting powerhouse.
- Enjoy a creative, high-performance environment in Gurugram, with flexibility and long-term growth opportunities.
- Contribute to next-gen solutions in AI, cloud, and digital transformation.

Skills: GCP, Node.js, CI/CD, data architecture, reinforcement learning, cloud-native technologies, MQTT, architecture, Kubernetes, NumPy, DevOps, cloud, data, PyTorch, Golang, Keras, speech recognition, Azure, data science, NLP, Kafka, TensorFlow, AWS, pandas, Docker, AI/ML, NoSQL, React, Python, design, computer vision, WebSockets, graph DBs, Elixir

Posted 1 week ago

Apply

0.0 - 4.0 years

3 - 7 Lacs

Pune

Work from Office

Source: Naukri

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after the call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target.
- Inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver:
1. Process: number of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT.
2. Team Management: productivity, efficiency, absenteeism.
3. Capability Development: triages completed, Technical Test performance.

Mandatory Skills: Scala programming.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after the call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target.
- Inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver:
1. Process: number of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT.
2. Team Management: productivity, efficiency, absenteeism.
3. Capability Development: triages completed, Technical Test performance.

Mandatory Skills: Hadoop.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Source: Naukri

Job Requirement for Offshore Data Engineer (with ML expertise)

Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

Technical Skills & Expertise:

PySpark & Apache Spark:
- Extensive experience with PySpark and Spark for big data processing and transformation.
- Strong understanding of Spark architecture, optimization techniques, and performance tuning.
- Ability to work with Spark jobs in distributed computing environments like Databricks.

Data Mining & Transformation:
- Hands-on experience in designing and implementing data mining workflows.
- Expertise in data transformation processes, including ETL (Extract, Transform, Load) pipelines.
- Experience in large-scale data ingestion, aggregation, and cleaning.

Programming Languages:
- Python & Scala: proficient in Python for data engineering tasks, including libraries like Pandas and NumPy; Scala proficiency is preferred for Spark job development.

Big Data Concepts:
- In-depth knowledge of big data frameworks and paradigms, such as distributed file systems, parallel computing, and data partitioning.

Big Data Technologies:
- Cassandra & Hadoop: experience with NoSQL databases like Cassandra and distributed storage systems like Hadoop.
- Data Warehousing Tools: proficiency with Hive for data warehousing solutions and querying.
- ETL Tools: experience with Beam architecture and other ETL tools for large-scale data workflows.

Cloud Technologies (GCP):
- Expertise in Google Cloud Platform (GCP), including core services like Cloud Storage, BigQuery, and Dataflow.
- Experience with Dataflow jobs for batch and stream processing.
- Familiarity with managing workflows using Airflow for task scheduling and orchestration in GCP.

Machine Learning & AI:
- GenAI Experience: familiarity with Generative AI and its applications in ML pipelines.
- ML Model Development: knowledge of basic ML model building using tools like Pandas and NumPy, and visualization with Matplotlib.
- MLOps Pipeline: experience managing end-to-end MLOps pipelines for deploying models in production, particularly LLM (Large Language Model) deployments.
- RAG Architecture: understanding and experience building pipelines using Retrieval-Augmented Generation (RAG) architecture to enhance model performance and output.

Tech stack: Spark, PySpark, Python, Scala, GCP Dataflow, Cloud Composer (Airflow), ETL, Databricks, Hadoop, Hive, GenAI, basic ML modeling knowledge, MLOps experience, LLM deployment, RAG.
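For a concrete feel of the batch ETL pattern this posting centers on, here is a hedged Scala sketch that ingests raw JSON, aggregates it, and writes partitioned output; the gs:// paths and column names are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyOrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-orders-etl").getOrCreate()

    // Ingest raw JSON from a hypothetical Cloud Storage landing zone.
    val orders = spark.read.json("gs://raw-zone/orders/")

    // Clean and aggregate: completed orders rolled up per day and country.
    val daily = orders
      .filter(col("status") === "COMPLETED")
      .withColumn("order_date", to_date(col("created_at")))
      .groupBy(col("order_date"), col("country"))
      .agg(count("*").as("orders"), sum("amount").as("revenue"))

    // Partitioned Parquet keeps downstream reads (Hive, BigQuery external
    // tables) pruned to the dates they actually need.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("gs://curated-zone/daily_orders/")
  }
}
```

In a GCP setup like the one described, a job of this shape would typically be triggered from Cloud Composer (Airflow) on a daily schedule.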

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Remote

Source: Naukri

Job Title: Senior Machine Learning Engineer
Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

Requirements:
- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Strong programming skills in Python and experience with ML frameworks.
- Proficiency in containerization (Docker) and orchestration (Kubernetes) technologies.
- Solid understanding of CI/CD principles and tools (e.g., Jenkins, GitLab CI, GitHub Actions).
- Knowledge of data engineering concepts and experience building data pipelines.
- Strong understanding of computational, storage, and orchestration resources on cloud platforms.

Responsibilities:
- Deploying and managing ML models, especially on GCP services such as Cloud Run, Cloud Functions, and Vertex AI (though the role is cloud-platform agnostic).
- Implementing MLOps best practices, including model version tracking, governance, and monitoring for performance degradation and drift.
- Creating and using benchmarks, metrics, and monitoring to measure and improve services.
- Collaborating with data scientists and engineers to integrate ML workflows from onboarding to decommissioning.
- Working with MLOps tools like Kubeflow, MLflow, and Data Version Control (DVC).
- Managing ML models on any of the following: AWS (SageMaker), Azure (Machine Learning), and GCP (Vertex AI).

Tech Stack: AWS, GCP, or Azure experience (GCP preferred); PySpark required; Databricks is a plus; ML experience; Docker and Kubernetes.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Greenlight is the leading family fintech company on a mission to help parents raise financially smart kids. We proudly serve more than 6 million parents and kids with our award-winning banking app for families. With Greenlight, parents can automate allowance, manage chores, set flexible spend controls, and invest for their family's future. Kids and teens learn to earn, save, spend wisely, and invest. At Greenlight, we believe every child should have the opportunity to become financially healthy and happy. It's no small task, and that's why we leap out of bed every morning to come to work. Because creating a better, brighter future for the next generation depends on it.

We are looking for a Software Engineer II with a background in building scalable systems and services to join our Platform Services (Safety) team. Our ideal candidate understands application design and development but also wants to participate in the deployment and operational phases of the SDLC. In this role, you will be working primarily with Java and Kotlin. This role reports to the Engineering Manager.

Technologies we use:
- Backend: Java/Kotlin
- REST
- AWS
- MySQL, DynamoDB, Redis
- Kubernetes, Ambassador, Helm, Rancher

What you will be doing:
- Building and supporting microservices in Java/Kotlin that support our core product.
- Working with the Cloud Infrastructure team to deploy your services in Kubernetes on AWS.
- Continuously evaluating and improving your code quality and the reliability and availability of your team's services through metrics, monitoring, and testing.
- Working with REST APIs.
- Building, supporting, and operating your services for use within Greenlight engineering and for our partners.
- Improving engineering tooling, processes, and standards to enable faster, more consistent, more reliable, and highly repeatable application delivery.

What you should bring:
- 4+ years of software development experience.
- Bachelor's Degree in Computer Science or equivalent.
- Experience with languages on the JVM (Kotlin, Java, Scala, etc.).
- Experience with large-scale, performant applications using cloud architecture and services; AWS and Kubernetes highly preferred.
- Experience building quality software and writing your own unit tests.
- A collaborative, positive, inclusive, and team-oriented attitude.
- A desire to learn and master new technologies.

Who we are: It takes a special team to aim for a never-been-done-before mission like ours. We're looking for people who love working together because they know it makes us stronger, people who look to others and ask, "How can I help?" and then "How can we make this even better?" If you're ready to roll up your sleeves and help parents raise a financially smart generation, apply to join our team.

Greenlight is an equal opportunity employer and will not discriminate against any employee or applicant based on age, race, color, national origin, gender, gender identity or expression, sexual orientation, religion, physical or mental disability, medical condition (including pregnancy, childbirth, or a medical condition related to pregnancy or childbirth), genetic information, marital status, veteran status, or any other characteristic protected by federal, state or local law. Greenlight is committed to an inclusive work environment and interview experience. If you require reasonable accommodations to participate in our hiring process, please reach out to your recruiter directly or email recruiting@greenlight.me.
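Since the posting calls out JVM languages and "writing your own unit tests", here is a hedged ScalaTest sketch (Scala shown to match this page's theme; the posting's stack is Java/Kotlin, and the Allowance logic is entirely hypothetical).

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical domain logic: splitting an allowance in cents among kids.
object Allowance {
  def split(totalCents: Long, kids: Int): Seq[Long] = {
    require(kids > 0, "kids must be positive")
    val base      = totalCents / kids
    val remainder = (totalCents % kids).toInt
    // Hand out the leftover cents one at a time so the parts always sum
    // back to the original total.
    Seq.tabulate(kids)(i => base + (if (i < remainder) 1L else 0L))
  }
}

class AllowanceSpec extends AnyFunSuite {
  test("split preserves the total") {
    assert(Allowance.split(1001, 3).sum == 1001)
  }
  test("split is fair within one cent") {
    val parts = Allowance.split(1001, 3)
    assert(parts.max - parts.min <= 1)
  }
}
```

Money-as-integer-cents plus property-style assertions (totals preserved, fairness bounded) is a common way to keep financial arithmetic test coverage honest.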

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Purpose

For over 15 years, we have been a premier global provider of multi-cloud management, cloud-native application development solutions, and strategic end-to-end digital transformation services. Headquartered in Canada and with regional headquarters in the U.S. and the United Kingdom, Centrilogic delivers smart, streamlined solutions to clients worldwide.

We are looking for a passionate and experienced Data Engineer to work with our other 70 Software, Data and DevOps engineers to guide and assist our clients' data modernization journeys. Our team works with companies with ambitious missions: clients who are creating new, innovative products, often in uncharted markets. We work as embedded members and leaders of our clients' development and data teams. We bring experienced senior engineers, leading-edge technologies and mindsets, and creative thinking. We show our clients how to move to modern frameworks of data infrastructure and processing, and we help them reach their full potential with the power of data.

In this role, you'll be the day-to-day primary point of contact with our clients to modernize their data infrastructure, architecture, and pipelines.

Principal Responsibilities:
- Consulting clients on cloud-first strategies for core, bet-the-company data initiatives.
- Providing thought leadership on both process and technical matters.
- Becoming a real champion and trusted advisor to our clients on all facets of data engineering.
- Designing, developing, deploying, and supporting the modernization and transformation of our clients' end-to-end data strategy, including infrastructure, collection, transmission, processing, and analytics.
- Mentoring and educating clients' teams to keep them up to speed with the latest approaches, tools, and skills, and setting them up for continued success post-delivery.

Required Experience and Skills:
- Must have either the Microsoft Certified Azure Data Engineer Associate or Fabric Data Engineer Associate certification.
- Must have experience working in a consulting or contracting capacity on large data management and modernization programs.
- Experience with SQL Server and data engineering on platforms such as Azure Data Factory, Databricks, Data Lake, and Synapse.
- Strong knowledge and demonstrated experience with Delta Lake and Lakehouse architecture.
- Strong knowledge of securing Azure environments, including RBAC, Key Vault, and Azure Security Center.
- Strong knowledge of Kafka and Spark, and extensive experience using them in a production environment.
- Strong, demonstrable experience as a DBA in large-scale MS SQL environments deployed in Azure.
- Strong problem-solving skills, with the ability to get to the root of an issue quickly.
- Strong knowledge of Scala or Python.
- Strong knowledge of Linux administration and networking.
- Scripting skills and Infrastructure as Code (IaC) experience using PowerShell, Bash, and ARM templates.
- Understanding of security and corporate governance issues related to cloud-first data architecture, as well as accepted industry solutions.
- Experience enabling continuous delivery for development teams using scripted cloud provisioning and automated tooling.
- Experience working with an Agile development methodology that is fit for purpose.
- Sound business judgment and demonstrated leadership.

Posted 1 week ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Source: Naukri

Role & Responsibilities

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand client requirements in detail and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Primary skills: Technology -> Functional Programming -> Scala

Additional information (optional):
- Knowledge of design principles and fundamentals of architecture
- Understanding of performance engineering
- Knowledge of quality processes and estimation techniques
- Basic understanding of the project domain
- Ability to translate functional/nonfunctional requirements into system requirements
- Ability to design and code complex programs
- Ability to write test cases and scenarios based on the specifications
- Good understanding of SDLC and agile methodologies
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
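As a hedged illustration of the "Functional Programming -> Scala" skill this listing names, here is a small, self-contained example of core functional idioms (immutable data, sealed ADTs, pattern matching, and Option instead of null); the pricing domain is invented.

```scala
object PricingDemo {
  // A closed set of customer types, modeled as an algebraic data type.
  sealed trait Customer
  case object Regular extends Customer
  case class Member(years: Int) extends Customer

  // Pattern matching with a guard; the compiler checks exhaustiveness.
  def discount(c: Customer): BigDecimal = c match {
    case Regular             => BigDecimal(0)
    case Member(y) if y >= 5 => BigDecimal("0.15")
    case Member(_)           => BigDecimal("0.05")
  }

  // Option makes "no total for an empty basket" explicit, with no nulls.
  def total(prices: List[BigDecimal], c: Customer): Option[BigDecimal] =
    if (prices.isEmpty) None
    else Some(prices.sum * (1 - discount(c)))

  def main(args: Array[String]): Unit = {
    println(total(List(BigDecimal(100), BigDecimal(50)), Member(6))) // Some(127.50)
    println(total(Nil, Regular))                                     // None
  }
}
```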

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

We are looking for highly skilled and motivated Software Engineers with a strong technical background and a natural curiosity and interest in solving problems efficiently and elegantly. The ideal candidate will have experience in developing application and/or server-side software, and a passion for building scalable, high-performance solutions.

Key Responsibilities:
- Design, develop, and maintain software applications and server-side systems.
- Write clean, efficient, and maintainable code following best practices.
- Collaborate with cross-functional teams to define and implement new features.
- Optimize and improve existing code for performance, reliability, and scalability.
- Troubleshoot and debug issues across the software stack.
- Stay up to date with emerging technologies and best practices in software development.
- Participate in code reviews and contribute to a culture of continuous improvement.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in software development, with a focus on application or server-side development.
- Strong proficiency in one or more programming languages such as Go, Scala, or JavaScript.
- Knowledge of database systems (SQL and NoSQL) and experience designing efficient data models.
- Strong understanding of algorithms, data structures, and system design.

Preferred Qualifications:
- Experience with microservices architecture and distributed systems.
- Knowledge of DevOps practices, CI/CD pipelines, containerisation, and automated testing.
- Familiarity with front-end technologies such as React or Svelte.

About Anoki AI: Anoki AI is a pioneering AI company revolutionizing the world of connected TV (CTV), from content discovery to advertising and engagement. Anoki AI empowers content partners, CTV platforms, and advertisers to connect with their target audiences with unparalleled precision for maximum impact. Our suite of innovative solutions, ContextIQ (AI-powered contextual CTV advertising), Live TVx (AI-enhanced native FAST service), and AdMagic (GenAI for video ad creation and personalization), harnesses the power of cutting-edge AI to deliver hyper-personalized viewing experiences that seamlessly integrate high-quality content and contextually relevant, dynamically customized ads that resonate deeply with viewers.

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Source: Naukri

We are seeking a highly motivated and technically proficient Big Data Engineer to join our innovative team and contribute to the development of a next-generation Big Data platform. This is an exciting opportunity to work on cutting-edge solutions handling petabyte-scale datasets for one of the world's largest technology companies, headquartered in Silicon Valley.

Key Responsibilities:
- Design and develop scalable Big Data analytical applications.
- Build and optimize complex ETL pipelines and data processing frameworks.
- Implement large-scale, near real-time streaming data processing systems.
- Continuously enhance and support the project's codebase, CI/CD pipelines, and deployment infrastructure.
- Collaborate with a team of top-tier engineers to build highly performant and resilient data systems using the latest Big Data technologies.

Required Qualifications:
- Strong programming skills in Scala, Java, or Python (Scala preferred).
- Hands-on experience with Apache Spark, Hadoop, and Hive.
- Proficiency in stream processing technologies such as Kafka, Spark Streaming, or Akka Streams.
- Solid understanding of data quality, validation, and quality engineering practices.
- Experience with Git and version control best practices.
- Ability to rapidly learn and apply new tools, frameworks, and technologies.

Preferred Qualifications:
- Strong experience with AWS cloud services (e.g., EMR, S3, Lambda, Glue, Redshift).
- Familiarity with Unix-based operating systems and shell tooling (bash, ssh, grep, etc.).
- Experience with GitHub-based development workflows and pull request processes.
- Knowledge of JVM-based build tools such as SBT, Maven, or Gradle.

What We Offer:
- Opportunity to work on bleeding-edge Big Data projects with global impact.
- A collaborative and intellectually stimulating environment.
- Competitive compensation and performance-based incentives.
- Flexible work schedule.
- Comprehensive benefits package, including medical insurance and fitness programs.
- Regular corporate social events and team-building activities.
- Ongoing professional growth and career development opportunities.
- Access to a modern and well-equipped office space.
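Since the posting emphasizes near real-time streaming with Kafka and Spark (Scala preferred), here is a hedged Structured Streaming sketch maintaining windowed counts from a Kafka topic; the broker address, topic, JSON field, and window size are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-counts").getOrCreate()

    // Kafka source exposes key/value as binary plus metadata columns,
    // including the broker-assigned timestamp used below.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "clicks")
      .load()
      .select(
        col("timestamp"),
        get_json_object(col("value").cast("string"), "$.page").as("page"))

    // Watermark bounds state so late events older than 10 minutes are dropped.
    val counts = events
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window(col("timestamp"), "1 minute"), col("page"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console") // a real job would write to a store or another topic
      .option("checkpointLocation", "/tmp/chk/clicks")
      .start()
      .awaitTermination()
  }
}
```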

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka.
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Strong skills in building positive relationships across Product and Engineering.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture.
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components).
- Experience working in Agile and Scrum development processes.
- Experience in EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake.
- Experience architecting data products in streaming, serverless, and microservices architectures and platforms.
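SQL analytical (window) functions come up repeatedly in these pipeline roles; here is a hedged Scala sketch of the pattern in Spark, ranking each user's sessions by duration over an invented dataset.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object SessionRanks {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("session-ranks").getOrCreate()
    import spark.implicits._

    // Invented sample data standing in for a real sessions table.
    val sessions = Seq(
      ("alice", "s1", 300L), ("alice", "s2", 120L),
      ("bob",   "s3", 900L), ("bob",   "s4", 900L)
    ).toDF("user", "session_id", "duration_s")

    // Ordered window: rank within each user; ties share a rank.
    val byUser = Window.partitionBy("user").orderBy(col("duration_s").desc)

    sessions
      .withColumn("rank", dense_rank().over(byUser))
      // Unordered window: a per-user total alongside each row.
      .withColumn("user_total", sum("duration_s").over(Window.partitionBy("user")))
      .show()
  }
}
```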

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About Company: We provide companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, leading to better processes and outcomes.

Experience: 7 to 10 Years

Qualifications (Must Have):
- Experience in handling team members.
- Proficiency in working with cloud platforms (AWS, Azure, GCP).
- Experience in SQL, NoSQL, and data modelling.
- Experience in Python programming.
- Experience in the design, development, and deployment of data architecture.

Experience:
- 8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization.
- Demonstrated success in managing global teams, especially across US and India time zones.
- Proven track record in leading data engineering teams and managing end-to-end project delivery.
- Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc.
- 2+ years of hands-on experience designing and developing data integration solutions using Matillion and/or dbt.
- Expertise in architecting data solutions, having successfully implemented at least two end-to-end projects with multiple transformation layers.

Technical Skills:
- Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs.
- Expertise in programming languages such as Python, Scala, or Java.
- Proficiency in designing and delivering data pipelines in cloud data warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc.
- Solid understanding of database systems (relational and NoSQL) and data modeling techniques.
- Strong knowledge of data engineering and integration frameworks.
- Good grasp of coding standards, with the ability to define standards and testing strategies for projects.
- Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services.
- Enthusiasm for working in an Agile methodology.
- Comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.

Skills: SQL, data integration, Matillion, Redshift, NoSQL, Java, data architecture design, DevOps, AWS, data warehousing, Azure, architecture, data pipeline development, Agile methodology, pipelines, ETL/ELT tools, CI/CD, dbt, data engineering, integration, team management, Scala, data modeling, Striim, GitHub, cloud platforms (AWS, Azure, GCP), Snowflake, Python

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Work from Office

Source: Naukri

Job Title: Data Engineer, Informatica IDMC
Location: Remote/Contract
Experience Level: Mid to Senior (7+ years)

Job Summary: We are seeking a highly skilled Data Engineer with a minimum of 5 years of hands-on experience in Informatica Intelligent Data Management Cloud (IDMC). The successful candidate will design, implement, and maintain scalable data integration solutions using Informatica Cloud services. Experience with CI/CD pipelines is required to ensure efficient development and deployment cycles. Familiarity with Informatica Catalog, Data Governance, and Data Quality, or with Azure Data Factory, is considered a strong advantage.

Key Responsibilities:
- Design, build, and optimize end-to-end data pipelines using Informatica IDMC, including Cloud Data Integration and Cloud Application Integration.
- Implement ETL/ELT processes to support data lakehouse and EDW use cases.
- Develop and maintain CI/CD pipelines to support automated deployment and version control.
- Work closely with data architects, analysts, and business stakeholders to translate data requirements into scalable solutions.
- Monitor job performance, troubleshoot issues, and ensure compliance with SLAs and data quality standards.
- Document technical designs, workflows, and integration processes following best practices.
- Collaborate with DevOps and cloud engineering teams to ensure seamless integration within the cloud ecosystem.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- Minimum 5 years of hands-on experience with Informatica IDMC.
- Experience building and deploying CI/CD pipelines using tools such as Git or Azure DevOps.
- Proficiency in SQL, data modeling, and transformation logic.
- Experience with cloud platforms (Azure or OCI).
- Strong problem-solving skills in data operations and pipeline performance.

Preferred / Nice-to-Have Skills:
- Experience with Informatica Data Catalog for metadata and lineage tracking.
- Familiarity with Informatica Data Governance tools such as Axon and Business Glossary.
- Hands-on experience with Informatica Data Quality (IDQ) for data profiling and cleansing.
- Experience developing data pipelines using Azure Data Factory.

Posted 1 week ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Kolkata

Work from Office

Source: Naukri

Diverse Lynx is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Chennai

Work from Office

Source: Naukri

KC International School is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

The Data Engineer at KC will design, develop, and maintain all school data infrastructure, ensuring accurate and efficient data management.

Posted 1 week ago

Apply

10.0 - 15.0 years

32 - 37 Lacs

Bengaluru

Work from Office

Source: Naukri

The payments industry is a very exciting and fast-developing area, with many new and innovative solutions coming to market. With strong demand for new solutions in this space, it promises to be an exciting area of innovation. Visa is a strong leader in the payment industry and is rapidly transitioning into a technology company with significant investments in this area. If you want to be in the exciting payment space, learn fast, and make big impacts, Ecosystem & Operational Risk technology, part of Visa's Value-Added Services business unit, is an ideal place for you!

In the Ecosystem & Operational Risk (EOR) technology group, the Payment Fraud Disruption team is responsible for building critical risk and fraud detection and prevention applications and services at Visa. This includes idea generation, architecture, design, development, and testing of products, applications, and services that provide Visa clients with solutions to detect, prevent, and mitigate fraud for Visa and Visa client payment systems. We are in search of inquisitive, creative, and skillful technologists to join our ranks.

We are currently looking for a Director of Software Engineering who will take the lead and manage several strategic initiatives within our organization. The right candidate will possess a strong software engineering background, with demonstrated leadership experience driving technical architecture, design, and delivery of products and services that have created business value and delivered impact within the payments or payments-risk domain or similar industries. This position is ideal for an experienced engineering leader who is passionate about collaborating with business and technology partners and engineers to solve challenging business problems. You will be a key driver in the effort to define the shared strategic vision for the Payment Fraud Disruption platform and to define tools and services that safeguard Visa's payment systems.

This is a hybrid position. Expectation of days in office will be confirmed by your Hiring Manager.

Basic Qualifications:
- 10+ years of relevant work experience and a Bachelor's degree, OR 13+ years of relevant work experience.

Preferred Qualifications:
- 12 or more years of work experience with a Bachelor's degree, or 8-10 years of experience with an advanced degree (e.g., Master's, MBA, JD, MD), or 6+ years of work experience with a PhD.
- Experience leading delivery and deployment of ML models, model refresh, and experimentation.
- Experience leading product development and delivery of AI/ML solutions applied to real-world problems.
- Experience leading design and development of mission-critical, secure, reliable systems.
- Experience leading delivery across multiple technologies: Python, Java/J2EE, Apache Kafka, Apache Flink, Hive, MySQL, Hadoop, Spark, Scala, design patterns, test automation frameworks.
- Experience leading delivery of streaming analytics platforms.
- Excellent understanding of algorithms and data structures.
- Excellent problem-solving and analytical skills.
- Capable of forming and advocating an independent viewpoint.
- Strong experience with agile methodologies.
- Excels in partnering with Product leaders and technical product managers on requirements workshops, helping define joint product/technology roadmaps and driving prioritization.
- Experience driving continuous improvements to processes/tools for better developer efficiency and productivity.
- Demonstrated ability to drive measurable improvements across engineering, delivery, and performance metrics.
- Demonstrated success in leading high-performing, multi-disciplinary, and geographically distributed engineering teams.
- Demonstrated ability to hire, develop, and retain high-caliber talent.
- Must demonstrate longer-term strategic thinking and stay abreast of the latest technologies to assess what's possible technically.
- Strong collaboration and effective communication, with a focus on win-win outcomes.

Posted 1 week ago

Apply

0.0 - 1.0 years

2 - 3 Lacs

Bengaluru

Work from Office

Source: Naukri

We at Barracuda are at the forefront of protecting our customers from email-borne threats and data leaks. As an Analyst, you will have the opportunity to work with a core team of Threat Analysts who specialize in stopping malicious traffic and content from reaching our customers.

What you'll be working on:
- Analyze attack patterns and data trends, identifying drift and anomalies.
- Conduct root cause analysis and develop hypotheses for missed attacks.
- Report findings to the ML team with impact assessments.
- Evaluate feature importance and model effectiveness.
- Build reports, queries, and dashboards for insights.

What you bring to the role:
- 0-1 years of experience in Python, Scala, SQL, Databricks, and Spark.
- Hands-on experience in ML model development (MLlib, TensorFlow, PyTorch, scikit-learn).
- Understanding of email security (spam/phishing) is a plus.
- Strong analytical, communication, and adaptability skills.
- Willingness to work in a fast-paced, shift-based environment.
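As a hedged sketch of the drift analysis this role describes, here is a Scala Spark query comparing today's volume per attack category against a trailing 7-day average; the table name, columns, and 50% thresholds are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object AttackDrift {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("attack-drift").getOrCreate()

    // Hypothetical verdicts table: one row per analyzed email.
    val daily = spark.table("security.email_verdicts")
      .groupBy(col("category"), to_date(col("received_at")).as("day"))
      .agg(count("*").as("volume"))

    // Trailing 7-day baseline, excluding today.
    val baseline = daily
      .filter(col("day") >= date_sub(current_date(), 7) && col("day") < current_date())
      .groupBy("category")
      .agg(avg("volume").as("avg_7d"))

    val today = daily.filter(col("day") === current_date())

    // Flag categories whose volume moved more than 50% off baseline,
    // in either direction (a drop can mean a detection gap).
    today.join(baseline, "category")
      .withColumn("ratio", col("volume") / col("avg_7d"))
      .filter(col("ratio") > 1.5 || col("ratio") < 0.5)
      .show()
  }
}
```

A query of this shape is the kind of thing that would back a recurring Databricks dashboard tile or alert.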

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Chennai

Work from Office

Source: Naukri

Your Profile

As a senior software engineer with Capgemini, you will have:
- 3+ years of experience in Scala with a strong project track record.
- Hands-on experience as a Scala/Spark developer.
- Hands-on SQL writing skills on RDBMS (DB2) databases.
- Experience working with different file formats like JSON, Parquet, AVRO, ORC, and XML.
- Experience in an HDFS platform development project.
- Proficiency in data analysis, data profiling, and data lineage.
- Strong oral and written communication skills.
- Experience working in Agile projects.

Your Role
- Work on Hadoop, Spark, Hive & SQL queries.
- Perform code optimization for performance, scalability, and configurability.
- Develop data applications at scale in the Hadoop ecosystem.

What you'll love about working here

Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means when the future doesn't look as bright as you'd like, you have the opportunity to make change, to rewrite it.

When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future.

At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.

About Company
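Since the profile calls out working across JSON, Parquet, AVRO, ORC, and XML, here is a hedged Scala sketch touring those formats in Spark; the paths are placeholders, and it assumes the spark-avro and spark-xml packages are on the classpath, which is a build assumption rather than something the posting states.

```scala
import org.apache.spark.sql.SparkSession

object FormatTour {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("format-tour").getOrCreate()

    // Built-in sources.
    val json    = spark.read.json("/data/in/events.json")
    val parquet = spark.read.parquet("/data/in/events.parquet")
    val orc     = spark.read.orc("/data/in/events.orc")

    // External packages: spark-avro ("avro") and spark-xml ("xml").
    val avro = spark.read.format("avro").load("/data/in/events.avro")
    val xml  = spark.read.format("xml")
      .option("rowTag", "event") // which XML element maps to a row
      .load("/data/in/events.xml")

    // Quick profiling pass: row counts and inferred schemas per format.
    Seq("json" -> json, "parquet" -> parquet, "orc" -> orc,
        "avro" -> avro, "xml" -> xml).foreach { case (name, df) =>
      println(s"$name: ${df.count()} rows")
      df.printSchema()
    }
  }
}
```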

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Mumbai

Work from Office

Naukri logo

Your Role: Python Developer
As a Python developer, you must have:
  • 2+ years of experience in Python / PySpark
  • Strong programming experience; Python, PySpark and Scala are preferred
  • Experience in designing and implementing CI/CD, build management, and development strategy
  • Experience with SQL and SQL analytical functions
  • Experience participating in key business, architectural and technical decisions
There is scope to get trained on AWS cloud technology.

Your Profile
Python and SQL Data Engineer
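
The SQL analytical functions this profile mentions are easy to demonstrate. Below is a small, hedged sketch, written in Scala in keeping with the rest of this page, ranking salaries per department with a window function in both the DataFrame API and plain SQL. The table and column names are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object WindowFunctionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("window-function-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Invented sample data: (department, employee, salary).
    val salaries = Seq(
      ("eng", "asha", 90), ("eng", "ravi", 75),
      ("ops", "meena", 60), ("ops", "arjun", 80)
    ).toDF("department", "employee", "salary")

    // DataFrame API: rank within each department by salary, highest first.
    val byDept = Window.partitionBy($"department").orderBy($"salary".desc)
    salaries.withColumn("rank", rank().over(byDept)).show()

    // Equivalent analytical SQL over a temp view.
    salaries.createOrReplaceTempView("salaries")
    spark.sql(
      """SELECT department, employee, salary,
        |       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS rank
        |FROM salaries""".stripMargin).show()

    spark.stop()
  }
}
```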

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 7 Lacs

Pune

Work from Office

Naukri logo

Your Role: PySpark Data Engineer
As a PySpark developer, you must have:
  • 2+ years of experience in PySpark
  • Strong programming experience; Python, PySpark and Scala are preferred
  • Experience in designing and implementing CI/CD, build management, and development strategy
  • Experience with SQL and SQL analytical functions
  • Experience participating in key business, architectural and technical decisions
There is scope to get trained on AWS cloud technology.

Your Profile
PySpark and SQL Data Engineer

Posted 1 week ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Your Role
  • Strong Spark programming experience with Java
  • Good knowledge of SQL query writing and shell scripting
  • Experience working in Agile mode
  • Analyze, design, develop, deploy and operate high-performance, high-quality services that serve users in a cloud environment
  • Good understanding of the client ecosystem and expectations
  • In charge of code reviews, the integration process, test organization and quality of delivery
  • Take part in development
  • Experienced in writing queries using SQL commands
  • Experienced in deploying and operating code in a cloud environment
  • Experienced in working without much supervision

Your Profile
Primary skills: Java, Spark, SQL. Secondary skills (good to have): Hadoop or any cloud technology, Kafka, or BO.
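
Since this role calls out optimizing code for performance and scalability, here is a brief, hedged sketch (in Scala, like the other examples on this page) of two routine Spark tunings: persisting a DataFrame that is reused downstream, and repartitioning by the aggregation key before a wide shuffle. The data and partition count are illustrative assumptions, not guidance from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object SparkTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-tuning-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Invented data: (customer_id, amount).
    val orders = (1 to 100000).map(i => (i % 50, i.toDouble)).toDF("customer_id", "amount")

    // Persist a frame that several downstream actions will reuse,
    // so the filter is computed once rather than per action.
    val enriched = orders.filter($"amount" > 10.0).persist(StorageLevel.MEMORY_AND_DISK)

    // Repartition by the aggregation key before the wide operation;
    // 8 partitions is an arbitrary illustrative choice.
    val byCustomer = enriched.repartition(8, $"customer_id")
    byCustomer.groupBy($"customer_id").sum("amount").show(5)

    enriched.unpersist()
    spark.stop()
  }
}
```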

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Linkedin logo

Posted 03/18/2020. Carmatec is looking for passionate DevOps Engineers to be part of our InstaCarma team. Not only will you have the chance to make your mark as an established DevOps Engineer, but you will also get to work and interact with seasoned professionals deeply committed to revolutionizing the Cloud scenario.

Job Responsibilities
  • Work on infrastructure provisioning/configuration management tools. We use Packer, Terraform and Chef.
  • Develop automation tools/scripts. We use Bash/Python/Ruby.
  • Be responsible for continuous integration and artefact management. We use Jenkins and Artifactory.
  • Set up automated deployment pipelines for microservices running as Docker containers.
  • Set up monitoring, alerting and metrics scraping for Java/Scala/Play applications using Prometheus and Graylog2, integrated with PagerDuty and Hipchat for alerting, reporting and monitoring.
  • Provide on-call production support and related incident management, reporting and postmortems.
  • Create runbooks and wikis for incidents, troubleshooting performed, etc.
  • Be a proactive member of your team by sharing knowledge.
  • Handle resource scheduling and orchestration using Mesos/Marathon.
  • Work closely with development teams to ensure that platforms are designed with operability in mind.
  • Function well in a fast-paced, rapidly changing environment.

Required Skills
  • A basic understanding of DevOps tools and automation frameworks
  • Outstanding organization, documentation, and communication skills
  • Must be skilled in Linux system administration (Ubuntu/CentOS)
  • Knowledge of AWS is a must (EC2, EBS, S3, Route53, CloudFront, SG, IAM, RDS, etc.)
  • Strong foundation in Docker internals and troubleshooting
  • Should know at least one configuration management tool: Chef/Ansible/Puppet
  • Good to have experience in at least one scripting language: Bash/Python/Ruby
  • Experience in at least one NoSQL database system (Elasticsearch/MongoDB/Redis/Cassandra) is a plus
  • Experience in a CI tool like Jenkins is preferred
  • Good understanding of how a 3-tier architecture works
  • Basic knowledge of any revision control tool like Git/Subversion
  • Should have experience working with monitoring tools like Nagios, New Relic, etc.
  • Should be proficient in log management using tools like rsyslog, Logstash, etc.
  • Working knowledge of the following: cron, HAProxy/nginx, LVM, MySQL, BIND (DNS), iptables
  • Experience with Atlassian tools (Jira, Hipchat, Confluence) will be a plus

Experience: 5+ years
Location: Bangalore

If the above description is of interest to you, please send your updated resume to teamhr@carmatec.com

Posted 1 week ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
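
Akka, in particular, comes up often in Scala roles. Below is a minimal actor sketch, assuming the classic akka-actor module and the Akka 2.6-style API; the actor and message names are invented for illustration.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A tiny classic actor that reacts to greeting messages.
class Greeter extends Actor {
  def receive: Receive = {
    case name: String => println(s"Hello, $name!")
  }
}

object GreeterApp extends App {
  val system  = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter](), "greeter")
  greeter ! "Scala"  // fire-and-forget, asynchronous message send
  Thread.sleep(500)  // crude wait for the async delivery, for this sketch only
  system.terminate()
}
```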

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles (a short code sketch illustrating several of the basic and medium concepts follows the list):

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
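
To make the list above concrete, here is a small, self-contained sketch touching several of the basic and medium questions: val vs var, case classes, pattern matching, Option instead of null, map vs flatMap, and lazy evaluation. It is illustrative only, not a model interview answer.

```scala
object InterviewConceptsSketch extends App {
  // val is an immutable binding; var can be reassigned.
  val fixed = 10
  var mutable = 10
  mutable += 1

  // Case classes get equality, toString and pattern-matching support for free.
  case class User(name: String, age: Int)
  val u = User("Asha", 30)

  // Pattern matching destructures values and can guard on conditions.
  val label = u match {
    case User(_, age) if age >= 18 => "adult"
    case _                         => "minor"
  }

  // Option models possible absence without null.
  val maybeAge: Option[Int] = Some(u.age)

  // map transforms inside the container; flatMap also flattens nesting.
  val doubled: Option[Int] = maybeAge.map(_ * 2)            // Some(60)
  val nested: Option[Int]  = maybeAge.flatMap(a => Some(a)) // Some(30), not Some(Some(30))

  // lazy defers evaluation until first use, then caches the result.
  lazy val expensive: Int = { println("computed once"); 42 }

  println(s"$fixed $mutable $label $doubled $nested $expensive")
}
```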

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies