
2810 Scala Jobs - Page 18

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office


Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities: Knowledge of more than one technology. Basics of architecture and design fundamentals. Knowledge of testing tools. Knowledge of agile methodologies. Understanding of project life cycle activities on development and maintenance projects. Understanding of one or more estimation methodologies. Knowledge of quality processes. Basics of the business domain to understand the business requirements. Analytical abilities, strong technical skills, good communication skills. Good understanding of the technology and domain. Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods. Awareness of the latest technologies and trends. Excellent problem solving, analytical and debugging skills.

Technical and Professional Requirements: Technology->Big Data - Data Processing->Spark
Preferred Skills: Technology->Java->Apache->Scala, Technology->Functional Programming->Scala
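
Since this listing names Spark with Scala as its core stack, here is a brief, hedged sketch of a typical Spark batch aggregation in Scala. It is illustrative only: the input path, column names, and output location are hypothetical and not taken from the listing.

```scala
// Minimal sketch: batch aggregation with Spark in Scala.
// Input path, schema, and output location are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-report")
      .getOrCreate()

    // Hypothetical input: a CSV of orders with region and amount columns.
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/orders.csv")

    // Aggregate revenue per region, keeping only non-empty groups.
    val report = orders
      .groupBy(col("region"))
      .agg(sum(col("amount")).as("revenue"))
      .filter(col("revenue") > 0)

    report.write.mode("overwrite").parquet("/data/reports/revenue_by_region")
    spark.stop()
  }
}
```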

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Overview: This position is for a Lead Data Engineer in the Commercial Data as a Service group. In this position you will be responsible for helping define and maintain the data systems key to delivering successful outcomes for our customers. You will be hands-on and work closely with a team of Data Engineers, guiding the associated data maintenance, integration, enhancement, load and transformation processes for the organization. This key individual will work closely with Data Architects to design and implement solutions and ensure successful implementations.

Role: Leads initiatives to build and maintain database technologies, environments, and applications, seeking opportunities for improvements and efficiencies. Architects internal data solutions as part of the full stack, including data modelling and integration with file-based as well as event-driven upstream systems. Writes SQL statements and procedures to optimize SQL execution and query development. Effectively utilizes tools such as Spark (Scala, Python), NiFi, Spark Streaming and Informatica for data ETL. Manages the deployment of optimally standardized data solutions and database updates to meet project deliverables. Leads database security posture, which includes proactively identifying security risks and implementing both risk mitigation plans and control functions. Oversees the resolution of chronic, complex problems to prevent future data performance issues. Supports process improvement efforts to identify and test opportunities for automation and/or reduction in time to deployment. Responsible for complex design (in conjunction with Data Architects), development, performance and system testing, and provides functional guidance and advice to experienced engineers. Mentors junior staff by providing training to develop technical skills and capabilities across the team.

All about you: Experience developing a specialization in a particular functional area (e.g., modeling, data loads, transformations, replication, performance tuning, logical and physical database design, troubleshooting, backup and recovery, and data security) leveraging Apache Spark, NiFi, Databricks, Snowflake, Informatica and streaming solutions. Experience leading a major work stream or multiple smaller work streams for a large domain initiative, often providing technical guidance and advice to project team members. Experience creating deliverables within the global database technology domains and sub-domains, supporting cross-functional leaders in the technical community to derive new solutions. Experience supporting automation and/or cloud delivery efforts; may perform financial and cost analysis. Experience in database architecture or other relevant IT experience. Experience leading business system application and database architecture design, influencing technology direction across a broad range of IT areas.

Posted 5 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About NCR VOYIX: NCR VOYIX Corporation (NYSE: VYX) is a leading global provider of digital commerce solutions for the retail, restaurant and banking industries. NCR VOYIX is headquartered in Atlanta, Georgia, with approximately 16,000 employees in 35 countries across the globe. For nearly 140 years, we have been the global leader in consumer transaction technologies, turning everyday consumer interactions into meaningful moments. Today, NCR VOYIX transforms the stores, restaurants and digital banking experiences with cloud-based, platform-led SaaS and services capabilities. Not only are we the leader in the market segments we serve and the technology we deliver, but we create exceptional consumer experiences in partnership with the world’s leading retailers, restaurants and financial institutions. We leverage our expertise, R&D capabilities and unique platform to help navigate, simplify and run our customers’ technology systems. Our customers are at the center of everything we do. Our mission is to enable stores, restaurants and financial institutions to exceed their goals – from customer satisfaction to revenue growth, to operational excellence, to reduced costs and profit growth. Our solutions empower our customers to succeed in today’s competitive landscape. Our unique perspective brings innovative, industry-leading tech to all the moving parts of business across industries. NCR VOYIX has earned the trust of businesses large and small, from the best-known brands around the world to your local favorite around the corner.

The role: The primary responsibility is to develop high-quality software solutions as a contributing member of a highly motivated team of engineers. You should understand what goes into building complex, resilient, scalable enterprise products and contribute through design and development. This individual will hold the title “Software Engineer II”, with the expectation of solving complex technical challenges and assisting in laying out the technical roadmap. You should have hands-on experience with complex applications/solutions that integrate with various components. Experience with production systems and migrating customers from legacy systems to later versions is preferred. Advanced knowledge of best practices for enterprise applications – logging, communication, coding, testing and CI/CD pipelines – is expected. The primary solution stack technology for this position is Java, with other preferred skills listed below.

Responsibilities include: Develop high-quality software which meets requirements, promotes re-use of software components, and facilitates ease of support. Diagnose, isolate, and implement remedies for system failures caused by errors in software code. Identify and implement process improvements in engineering practices. Utilize software-based system maintenance and tracking tools. Provide input and technical content for technical documentation, user help materials and customer training. Conduct unit tests, track problems, and implement changes to ensure adherence to the test plan and functional/non-functional requirements. Analyze, design and implement software mechanisms to improve code stability, performance, and reusability. Participate in and lead code review sessions. Create high-fidelity estimates of your own work efforts. Assist others in estimating task effort and dependencies; responsible for team commitments within the sprint. May be asked to lead and advise other engineering resources as part of project activities. Considered a subject matter expert in their chosen field. Participates with industry groups, stays current with technology and industry trends, disseminates knowledge to team members, forms best practices. Communicates with Solution Management and other internal teams. Participates in cross-functional collaboration within the organization. Works with developers to assist detailed resolution of problems which are proving difficult for lead developers to resolve. Works on improving the use of tools relating to AMS development.

Basic Qualifications: Bachelor’s degree in computer science or a related field. A minimum of 6 years of experience in software design and development. A minimum of 6 years of experience in the preferred technology stack.

Must Have: Very strong development experience with Java 11, Spring, Spring Boot. API-based design and development using REST APIs and GraphQL. Multi-threading concepts. Unit testing and integration testing frameworks like JUnit 5 and Mockito. Messaging services. Strong understanding of, and affinity towards, building scalable and robust solutions. Very strong understanding of NoSQL (MongoDB) and SQL databases. In-depth understanding of design patterns and the ability to design a class model and data model for a given requirement. Experience with CI/AppSec tools like Sonar, Coverity, WhiteSource, etc. Strong in debugging memory leaks, profiling, crashes, etc.

Good to Have: Hands-on development experience with Linux OS. Good understanding of NFT (performance, scalability and availability) and familiarity with related tools. Cloud-native application development. Linux OS and scripting. Familiarity with HTTPS/SSL. Networking concepts, such as how to set up and configure name servers, network interfaces and load balancers. Must have hands-on experience with at least two of the following: Docker and K8s, Azure/GCP, Cucumber, Scala, Helm. Deep understanding of software development and quality assurance best practices. Excellent written and verbal communication skills. Excellent teamwork and collaboration skills. Experience operating in an Agile environment, with a deep understanding of agile development principles. Familiarity with Continuous Improvement and Six Sigma Lean principles.

Offers of employment are conditional upon passage of screening criteria applicable to the job.

EEO Statement: Integrated into our shared values is NCR Voyix’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. NCR Voyix is committed to being a globally inclusive company where all people are treated fairly, recognized for their individuality, promoted based on performance and encouraged to strive to reach their full potential. We believe in understanding and respecting differences among all people. Every individual at NCR Voyix has an ongoing responsibility to respect and support a globally diverse environment.

Statement to Third Party Agencies: To ALL recruitment agencies: NCR Voyix only accepts resumes from agencies on the preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Voyix employees, or any NCR Voyix facility. NCR Voyix is not responsible for any fees or charges associated with unsolicited resumes. “When applying for a job, please make sure to only open emails that you will receive during your application process that come from a @ncrvoyix.com email domain.”

Posted 5 days ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job title: Big Data
Location: Bangalore/Mumbai/Pune/Chennai

Candidate Specification: Candidates should have 9+ years in Big Data, Java with Scala or Hadoop with Scala.

Job Description: Design, develop, and maintain scalable big data architectures and systems. Implement data processing pipelines using technologies such as Hadoop, Spark, and Kafka. Optimize data storage and retrieval processes to ensure high performance and reliability. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Perform data modeling, mining, and production processes to support business needs. Ensure data quality, governance, and security across all data systems. Stay updated with the latest trends and advancements in big data technologies. Experience with real-time data processing and stream analytics. Knowledge of advanced analytics and data visualization tools. Knowledge of DevOps practices and tools for continuous integration and deployment. Experience in managing big data projects and leading technical teams.

Skills Required
Role: Big Data - Manager
Industry Type: IT/Computers - Software
Required Education: B.E.
Employment Type: Full Time, Permanent
Key Skills: BIGDATA, HADOOP, JAVA, SCALA
Other Information
Job Code: GO/JC/224/2025
Recruiter Name: Devikala D

Posted 5 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Data Engineer
Candidate Specification: 5+ years, immediate to 30 days notice. (All 5 days work from office, 9 hours.)

Job Description: Experience with any modern ETL tools (PySpark, EMR, Glue, or others). Experience in AWS; programming knowledge in Python, Java, Snowflake. Experience in DBT and StreamSets (or similar tools like Informatica, Talend), with migration work done in the past. Agile experience is required, with Version One or Jira tool expertise. Provide hands-on technical solutions to business challenges and translate them into process/technical solutions. Good knowledge of CI/CD and DevOps principles. Experience in data technologies: Hadoop PySpark/Scala (any one).

Skills Required
Role: Data Engineer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: B Tech
Employment Type: Full Time, Permanent
Key Skills: PYSPARK, EMR, GLUE, ETL TOOL, AWS, CI/CD, DEVOPS
Other Information
Job Code: GO/JC/102/2025
Recruiter Name: Sheena Rakesh

Posted 5 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Title: Scala Developer – Chennai
Candidate Specification: 5+ years. Notice: immediate to 30 days.

Job Description: A minimum of 5 years of experience in Scala development. Strong programming and problem-solving skills. Hands-on experience with functional programming concepts. Familiarity with relevant frameworks and tools (if applicable). Design, implement, and maintain Scala applications. Collaborate with cross-functional teams to define and develop new features. Write clean, maintainable, and efficient code. Troubleshoot, debug, and optimize application performance. Contribute to the entire development lifecycle, including concept, design, build, deploy, test, release, and support. Experience with cloud platforms such as AWS.

Skills Required
Role: Scala Developer - Chennai
Industry Type: IT/Computers - Software
Employment Type: Full Time, Permanent
Key Skills: SCALA DEVELOPER, FUNCTIONAL PROGRAMMING
Other Information
Job Code: GO/JC/050/2025
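
As a quick illustration of the functional programming concepts this listing asks for, here is a small, self-contained Scala sketch using immutable data, higher-order functions, and Option. The Order type and sample values are invented for the example (maxByOption requires Scala 2.13+).

```scala
// Small sketch of functional style in Scala: immutable data,
// higher-order functions, and Option instead of null.
// The Order type and sample values are invented for the example.
final case class Order(id: Int, amount: BigDecimal)

object FpBasics {
  def main(args: Array[String]): Unit = {
    val orders = List(
      Order(1, BigDecimal("120.50")),
      Order(2, BigDecimal("0")),
      Order(3, BigDecimal("89.99"))
    )

    // Transform and filter without mutating anything.
    val revenue = orders
      .filter(_.amount > 0)
      .map(_.amount)
      .foldLeft(BigDecimal(0))(_ + _)

    // Option models the possibly-empty case explicitly.
    val biggest: Option[Order] = orders.maxByOption(_.amount) // Scala 2.13+

    println(s"revenue=$revenue, biggest=${biggest.map(_.id)}")
  }
}
```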

Posted 5 days ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion: as part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation and design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, and define the to-be processes and detailed functional designs based on requirements. You will support configuring solution requirements on the products; identify any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains.

Technical and Professional Requirements: Primary skills: Bigdata->Scala, Bigdata->Spark, Technology->Java->Play Framework, Technology->Reactive Programming->Akka
Preferred Skills: Bigdata->Spark, Bigdata->Scala, Technology->Reactive Programming->Akka, Technology->Java->Play Framework

Posted 5 days ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site


Job Description

Roles and Responsibilities: Design, develop, and maintain Scala-based microservices. Build scalable and reactive systems using the Akka or Lagom framework. Implement real-time data pipelines with Apache Pulsar. Develop and optimize data access using the Slick connector and PostgreSQL. Build advanced search capabilities using Elasticsearch. Work on containerized applications and deploy them using Kubernetes. Set up and manage CI/CD pipelines using GitLab. Collaborate with cross-functional teams to ensure on-time, high-quality deliveries.

Technical Skills: Strong hands-on experience with Scala. Proficient in the Akka or Lagom framework. Expertise in microservices architecture and containerization. Knowledge of Apache Pulsar for streaming. Experience in PostgreSQL and the Slick connector for DB integration. Proficient with Elasticsearch. Familiarity with GitLab, CI/CD pipelines, and Kubernetes (K8s). Excellent problem-solving and debugging skills. Strong communication and collaboration abilities.

Job Type: Contractual / Temporary
Location Type: In-person

Application Question(s): How many years of experience do you have as a Lead? Are you comfortable with a face-to-face interview (if required)?

Experience: PostgreSQL: 4 years (Required). Elasticsearch: 5 years (Required). Scala: 5 years (Required). Akka or Lagom framework: 5 years (Required). Microservices: 5 years (Required).

Location: Pune, Maharashtra (Required)
Work Location: In person
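
For context on the reactive-systems work described above, here is a minimal, hedged sketch of a typed Akka actor in Scala (the listing names Akka or Lagom; Akka Typed is shown here). The Greet protocol and names are illustrative, and the akka-actor-typed dependency is assumed.

```scala
// Minimal typed Akka actor sketch. The Greet protocol and system
// name are illustrative; assumes the akka-actor-typed dependency.
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object Greeter {
  final case class Greet(name: String)

  def apply(): Behavior[Greet] =
    Behaviors.receiveMessage { msg =>
      println(s"Hello, ${msg.name}")
      Behaviors.same // keep the same behavior for the next message
    }
}

object Main extends App {
  val system = ActorSystem(Greeter(), "greeter-system")
  system ! Greeter.Greet("world") // fire-and-forget message send
  system.terminate()
}
```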

Posted 5 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


You Lead the Way. We’ve Got Your Back. At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible, and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture.

We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.

How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities. Develop software in our technology stack, which is constantly evolving but currently includes big data, Spark, Python, Scala, GCP and the Adobe suite (such as Customer Journey Analytics). Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by the team. Take your place as a core member of an Agile team driving the latest development practices. Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code reviews and participate in technical discussions with the team on the best possible solutions.

As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology of #TeamAmex.

Minimum Qualifications:
· BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
· 5+ years of hands-on software development experience with Big Data & Analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow.
· Working knowledge of the Adobe suite, including Adobe Experience Platform, Adobe Customer Journey Analytics and CDP.
· Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
· Design and development experience with Kafka, real-time ETL pipelines and APIs is desirable.
· Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
· Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
· Understanding of distributed (multi-tiered) systems, data structures, algorithms and design patterns.
· Strong object-oriented programming skills and design patterns.
· Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
· Good knowledge of, and experience with, configuration management tools like GitHub.
· Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
· Looks proactively beyond the obvious for continuous improvement opportunities.
· Communicates effectively with product and cross-functional teams.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


The opportunity: A SAP Commerce (Hybris) Developer is required to work in an experienced team of software architects and developers, responsible for the design, development and testing of quality code to meet customer-driven specifications.

What you’ll be doing: Working within a project team to deliver high-quality code to deadlines. Guiding and instructing other developers in delivering a high-quality and robust SAP Commerce solution. Clearly communicating what is to be done, and the milestones achieved, to those within the project in an agreed manner. Realistically estimating the team’s delivery timescales. Solving problems posed using the tools and materials provided, or suggesting alternatives where appropriate. Creating robust solutions using the tools and materials provided, or suggesting alternatives where appropriate. Taking responsibility from, and deputising for, the SAP Commerce architect. Leading technical discussions on problem solving and solution design. Mentoring junior developers. Being a motivated self-starter.

What we want from you: Extensive Hybris development experience (ideally 2011+). Extensive experience coding in the Java language (Java 17+). Experience guiding 3 or more SAP Commerce developers. Experience working in the retail domain. Experience with data structures. Exposure to web technologies. Object-oriented software design patterns experience. Some understanding of HTML5, CSS and JavaScript. Familiarity with the Windows or Linux operating systems. Strong spoken and written communication.

If you know some of this, even better: Experience of delivering software as part of a team. Experience of Spring. Knowledge of JavaScript and front-end technologies. Knowledge of other JVM-based languages: Groovy, Scala, Clojure. Knowledge of one or more scripting languages, such as Groovy or Python. Knowledge of web services technologies such as SOAP, REST, JSON. Knowledge of relational database platforms: Oracle, SQL Server, MySQL. Knowledge of NoSQL database platforms such as Cassandra or MongoDB. Knowledge of message queuing systems such as Apache Kafka or RabbitMQ. Contributions to open source projects.

About VML: VML is a leading creative company that combines brand experience, customer experience, and commerce to create connected brands and drive growth. VML is celebrated for its innovative and award-winning work for blue chip client partners including AstraZeneca, Colgate-Palmolive, Dell, Ford, Intel, Microsoft, Nestlé, The Coca-Cola Company, and Wendy's. The agency is recognized by the Forrester Wave™ Reports, which name WPP as a “Leader” in Commerce Services, Global Digital Experience Services, Global Marketing Services and, most recently, Marketing Measurement & Optimization. As the world’s most advanced and largest creative company, VML’s global network is powered by 30,000 talented people across 60-plus markets, with principal offices in Kansas City, New York, Detroit, London, São Paulo, Shanghai, Singapore, and Sydney.

Posted 5 days ago

Apply

0.0 - 2.0 years

0 Lacs

Raipur, Chhattisgarh

On-site


Company Name: Interbiz Consulting Pvt Ltd
Position/Designation: Data Engineer
Job Location: Raipur (C.G.)
Mode: Work from office
Experience: 2 to 5 years

We are seeking a talented and detail-oriented Data Engineer to join our growing Data & Analytics team. You will be responsible for building and maintaining robust, scalable data pipelines and infrastructure to support data-driven decision-making across the organization.

Key Responsibilities: Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark. Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses. Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam. Build and schedule jobs using orchestration tools like Apache Airflow or Dagster. Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses. Implement data versioning and transformation using DBT and Apache Iceberg or Delta Lake. Manage data cataloging and lineage using tools like Marquez or Collibra. Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes. Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. [1–5+] years of experience in data engineering or related roles. Proficiency in Python, with strong knowledge of OOP and data structures & algorithms. Comfortable working in Linux environments for development and deployment. Strong command of SQL and understanding of relational (DBMS) and NoSQL databases. Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam. Hands-on experience with Airflow, Dagster, or similar orchestration tools. Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus. Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern Lakehouse architecture and related best practices. Familiarity with Marquez, Collibra, or other cataloging tools. Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools. Proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Interested candidates may share their CV at swapna.rani@interbizconsulting.com or visit www.interbizconsulting.com. Note: immediate joiners will be preferred.

Job Type: Full-time
Pay: From ₹25,000.00 per month
Benefits: Food provided. Health insurance. Leave encashment. Provident Fund.
Supplemental Pay: Yearly bonus.
Application Question(s): Do you have at least 2 years of work experience in Python? Do you have at least 2 years of work experience in Data Science? Are you from Raipur, Chhattisgarh? Are you willing to work for more than 2 years? What is your notice period? What is your current salary and what are you expecting?
Work Location: In person

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

Remote


AI/ML Engineer – Senior Consultant

The AI Engineering Group is part of the Data Science & AI Competency Center and focuses on the technical and engineering aspects of DS/ML/AI solutions. We are looking for experienced AI/ML Engineers to join our team to help us bring AI/ML solutions into production, automate processes, and define reusable best practices and accelerators.

Duties: The person we are looking for will become part of the Data Science and AI Competency Center, working in the AI Engineering team. The key duties are: Building high-performing, scalable, enterprise-grade ML/AI applications in a cloud environment. Working with Data Science, Data Engineering and Cloud teams to implement machine learning models into production. Practical and innovative implementations of ML/AI automation, for scale and efficiency. Design, delivery and management of industrialized processing pipelines. Defining and implementing best practices in the ML model life cycle and ML operations. Implementing AI/MLOps frameworks and supporting Data Science teams with best practices. Gathering and applying knowledge of modern techniques, tools and frameworks in the area of ML architecture and operations. Gathering technical requirements and estimating planned work. Presenting solutions, concepts and results to internal and external clients. Being a technical leader on ML projects: defining tasks and guidelines and evaluating results. Creating technical documentation. Supporting and growing junior engineers.

Must-have skills: Good understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model life cycle, AI architectures. Good understanding of cloud concepts and architectures, as well as working knowledge of selected cloud services, preferably GCP. Experience in programming ML algorithms and data processing pipelines using Python. At least 6-8 years of experience in production-ready code development. Experience in designing and implementing data pipelines. Practical experience implementing ML solutions on GCP Vertex AI and/or Databricks. Good communication skills. Ability to work in a team and support others. Taking responsibility for tasks and deliverables. Great problem-solving skills and critical thinking. Fluency in written and spoken English.

Nice-to-have skills and knowledge: Practical experience with other programming languages: PySpark, Scala, R, Java. Practical experience with tools like Airflow, ADF or Kubeflow. Good understanding of CI/CD and DevOps concepts, and experience working with selected tools (preferably GitHub Actions, GitLab or Azure DevOps). Experience applying and/or defining software engineering best practices. Experience productizing ML solutions using technologies like Docker/Kubernetes.

We offer: Stable employment; on the market since 2008, with 1300+ talents currently on board in 7 global sites. 100% remote. Flexibility regarding working hours. Full-time position. Comprehensive online onboarding program with a “Buddy” from day 1. Cooperation with top-tier engineers and experts. Internal Gallup Certified Strengths Coach to support your growth. Unlimited access to the Udemy learning platform from day 1. Certificate training programs; Lingarians earn 500+ technology certificates yearly. Upskilling support: capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company; 76% of our managers are internal promotions. A diverse, inclusive, and values-driven community. Autonomy to choose the way you work. We trust your ideas. Create our community together. Refer your friends to receive bonuses. Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.

Please click on this link to submit your application: https://system.erecruiter.pl/FormTemplates/RecruitmentForm.aspx?WebID=ac709bd295cc4008af7d0a7a0e465818

Posted 5 days ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


As a Digital Marketing Executive at DataFlair Web Services, you will play a crucial role in driving our online presence and increasing our brand visibility. Your expertise in SEO, SEM, social media marketing, and Instagram and Facebook marketing will be essential in achieving our marketing goals.

Key Responsibilities: Develop and implement SEO strategies to improve website rankings and increase organic traffic. Manage and optimize SEM campaigns to drive targeted traffic and improve conversion rates. Create and execute engaging social media campaigns across various platforms to increase brand awareness and engagement. Utilize Instagram marketing techniques to grow followers, increase engagement, and drive traffic to our website. Implement Facebook marketing strategies to reach targeted audiences, increase brand visibility, and drive conversions. Analyze data and metrics to track the performance of marketing campaigns and make data-driven decisions. Stay up-to-date with the latest digital marketing trends and best practices to ensure our strategies remain competitive and effective.

If you are a passionate digital marketer with a strong background in SEO and social media marketing, we want you to join our team and help us take DataFlair Web Services to the next level!

About Company: DataFlair Web Services is a leading provider of online training in niche technologies such as big data: Hadoop, Spark, Scala, HBase, Kafka, Storm, etc. We aim to reach the masses through our unique pedagogy model, offering self-paced learning and instructor-led learning with personalized guidance, lifetime course access, 24x7 support, live projects, resume and interview preparation, and ready-to-work level learning. Our goal is to provide learners with real-time technical experience through our expert instructors.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description: In this role you'll be responsible for building machine learning based systems and conducting data analysis that improves the quality of our large geospatial data. You’ll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation and deployment of the models at scale, which requires a good combination of data science and software development.

Responsibilities: Development of machine learning models. Building and maintaining software development solutions. Providing insights by applying data science methods. Taking ownership of delivering features and improvements on time.

Must-have Qualifications: 5+ years of experience as a senior data scientist, preferably with knowledge of NLP. Strong programming skills and extensive experience with Python. Professional experience working with LLMs, transformers and open-source models from Hugging Face. Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks. Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN, etc.). Experience using deep learning libraries and platforms, such as PyTorch. Experience with frameworks such as Scikit-learn, NumPy, Pandas, Polars. Excellent analytical and problem-solving skills. Excellent oral and written communication skills.

Extra Merit Qualifications: Knowledge in at least one of the following: NLP, information retrieval, data mining. Ability to do statistical modeling and build predictive models. Programming skills and experience with Scala and/or Java.

Posted 5 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5+ years. Notice period: immediate to 15 days. Rounds: 3 rounds (virtual).
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks.

Job Description – The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala). 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive.
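
As a hedged illustration of the Spark-on-Hive work this listing centers on, here is a minimal Scala sketch that queries a Hive-managed table through Spark SQL. The database and table names are placeholders, and a configured Hive metastore is assumed.

```scala
// Sketch: querying a Hive-managed table through Spark SQL in Scala.
// Database/table names are placeholders; a Hive metastore is assumed.
import org.apache.spark.sql.SparkSession

object HiveQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-query")
      .enableHiveSupport() // requires a configured Hive metastore
      .getOrCreate()

    val daily = spark.sql(
      """SELECT event_date, COUNT(*) AS events
        |FROM analytics.raw_events
        |GROUP BY event_date
        |ORDER BY event_date""".stripMargin)

    daily.show(20)
    spark.stop()
  }
}
```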

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Data Engineer – Databricks, Delta Live Tables, Data Pipelines
Location: Bhopal / Hyderabad / Pune (On-site)
Experience Required: 5+ Years
Employment Type: Full-Time

Job Summary: We are seeking a skilled and experienced Data Engineer with a strong background in designing and building data pipelines using Databricks and Delta Live Tables. The ideal candidate should have hands-on experience in managing large-scale data engineering workloads and building scalable, reliable data solutions in cloud environments.

Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines using Databricks and Delta Live Tables. Work with structured and unstructured data to enable analytics and reporting use cases. Implement data ingestion, transformation, and cleansing processes. Collaborate with data architects, analysts, and data scientists to ensure data quality and integrity. Monitor data pipelines and troubleshoot issues to ensure high availability and performance. Optimize queries and data flows to reduce costs and increase efficiency. Ensure best practices in data security, governance, and compliance. Document architecture, processes, and standards.

Required Skills: Minimum 5 years of hands-on experience in data engineering. Proficient in Apache Spark, Databricks, Delta Lake, and Delta Live Tables. Strong programming skills in Python or Scala. Experience with cloud platforms such as Azure, AWS, or GCP. Proficient in SQL for data manipulation and analysis. Experience with ETL/ELT pipelines, data wrangling, and workflow orchestration tools (e.g., Airflow, ADF). Understanding of data warehousing, big data ecosystems, and data modeling concepts. Familiarity with CI/CD processes in a data engineering context.

Nice to Have: Experience with real-time data processing using tools like Kafka or Kinesis. Familiarity with machine learning model deployment in data pipelines. Experience working in an Agile environment.
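
Given the listing's focus on Databricks, Delta, and (as a nice-to-have) Kafka, here is a rough sketch of a streaming ingestion job in Scala: Spark Structured Streaming reading from Kafka into a Delta table. The broker address, topic, and paths are placeholders, and the spark-sql-kafka and delta-spark packages are assumed on the classpath.

```scala
// Sketch: Kafka -> Delta streaming ingestion with Spark Structured
// Streaming. Broker, topic, and paths are placeholders; assumes the
// spark-sql-kafka and delta-spark packages are on the classpath.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaToDelta {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-delta")
      .getOrCreate()

    // Read raw events from a Kafka topic as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .select(
        col("key").cast("string"),
        col("value").cast("string"),
        col("timestamp"))

    // Append into a Delta table, with checkpointing for recovery.
    val query = events.writeStream
      .format("delta")
      .option("checkpointLocation", "/delta/_checkpoints/events")
      .start("/delta/events")

    query.awaitTermination()
  }
}
```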

Posted 5 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description: Coders Brain is a global leader in IT services, digital and business solutions that partners with clients to simplify, strengthen, and transform their businesses. The company ensures high levels of certainty and satisfaction through deep industry expertise and a global network of innovation and delivery centers.

Job Title: Senior Data Engineer
Location: Hyderabad
Experience: 6+ Years
Employment Type: Full-Time

Job Summary: We are looking for a highly skilled Senior Data Engineer to join our Data Engineering team. You will play a key role in designing, implementing, and optimizing robust, scalable data solutions that drive business decisions for our clients. This position involves hands-on development of data pipelines, cloud data platforms, and analytics tools using cutting-edge technologies.

Key Responsibilities: Design and build reliable, scalable, and high-performance data pipelines to ingest, transform, and store data from various sources. Develop cloud-based data infrastructure using platforms such as AWS, Azure, or Google Cloud Platform (GCP). Optimize data processing and storage frameworks for cost efficiency and performance. Ensure high standards for data quality, integrity, and governance across all systems. Collaborate with cross-functional teams, including data scientists, analysts, and product managers, to translate requirements into technical solutions. Troubleshoot and resolve issues with data pipelines and workflows, ensuring system reliability and availability. Stay current with emerging trends and technologies in big data and cloud ecosystems and recommend improvements accordingly.

Required Qualifications: Bachelor’s degree in Computer Science, Software Engineering, or a related field. Minimum 6 years of professional experience in data engineering or a related discipline. Proficiency in Python, Java, or Scala for data engineering tasks. Strong expertise in SQL and hands-on experience with modern data warehouses (e.g., Snowflake, Redshift, BigQuery). In-depth knowledge of big data technologies such as Hadoop, Spark, or Hive. Practical experience with cloud-based data platforms such as AWS (e.g., Glue, EMR), Azure (e.g., Data Factory, Synapse), or GCP (e.g., Dataflow, BigQuery). Excellent analytical, problem-solving, and communication skills.

Nice to Have: Experience with containerization and orchestration tools such as Docker and Kubernetes. Familiarity with CI/CD pipelines for data workflows. Knowledge of data governance, security, and compliance best practices.

Posted 5 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Us: At Particleblack, we drive innovation through intelligent experimentation with artificial intelligence. Our multidisciplinary team of solution architects, data scientists, engineers, product managers, and designers collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead, letting you focus on your core while we accelerate your growth.

Responsibilities & Qualifications

Data Architecture Design: Develop and implement scalable and efficient data architectures for batch and real-time data processing. Design and optimize data lakes, warehouses, and marts to support analytical and operational use cases.

ETL/ELT Pipelines: Build and maintain robust ETL/ELT pipelines to extract, transform, and load data from diverse sources. Ensure pipelines are highly performant, secure, and resilient to handle large volumes of structured and semi-structured data.

Data Quality and Governance: Establish data quality checks, monitoring systems, and governance practices to ensure the integrity, consistency, and security of data assets. Implement data cataloging and lineage tracking for enterprise-wide data transparency.

Collaboration with Teams: Work closely with data scientists and analysts to provide accessible, well-structured datasets for model development and reporting. Partner with software engineering teams to integrate data pipelines into applications and services.

Cloud Data Solutions: Architect and deploy cloud-based data solutions using platforms like AWS, Azure, or Google Cloud, leveraging services such as S3, BigQuery, Redshift, or Snowflake. Optimize cloud infrastructure costs while maintaining high performance.

Data Automation and Workflow Orchestration: Utilize tools like Apache Airflow, n8n, or similar platforms to automate workflows and schedule recurring data jobs. Develop monitoring systems to proactively detect and resolve pipeline failures.

Innovation and Leadership: Research and implement emerging data technologies and methodologies to improve team productivity and system efficiency. Mentor junior engineers, fostering a culture of excellence and innovation.

Required Skills:

Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity. Proven expertise in designing and deploying large-scale data systems and pipelines.

Technical Skills: Proficiency in Python, Java, or Scala for data engineering tasks. Strong SQL skills for querying and optimizing large datasets. Experience with data processing frameworks like Apache Spark, Beam, or Flink. Hands-on experience with ETL tools like Apache NiFi, dbt, or Talend. Experience in pub/sub and stream processing using Kafka, Kinesis or the like.

Cloud Platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services.

Data Modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas).

Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions.

Preferred Skills: Familiarity with data visualization tools like Tableau or Power BI to support reporting teams. Knowledge of MLOps pipelines and collaboration with data scientists.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Develop and optimize data pipelines to enhance data processing efficiency. Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills: Must-have skills: proficiency in the Databricks Unified Data Analytics Platform. Strong understanding of data modeling and database design principles. Experience with ETL tools and data integration techniques. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information: The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform. This position is based at our Chennai office. 15 years of full-time education is required.

Posted 6 days ago

Apply

0.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site


Data Science and AI Developer

Job Description: We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

Key Responsibilities:
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

Requirements:
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Tooling:
Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab

Web development: Proficiency in HTML/CSS/JavaScript/React JS is a must. Python Django expertise: in-depth knowledge of e-commerce functionality or deep Python Django knowledge. Theming: proven experience in designing and implementing custom themes for Python websites. Responsive design: strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices. Problem solving: excellent problem-solving skills with the ability to troubleshoot and resolve issues independently. Collaboration: ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life. Interns must know how to connect the front end with data science.

Benefits: Competitive salary package. Flexible working hours. Opportunities for career growth and professional development. Dynamic and innovative work environment.

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person

Posted 6 days ago

Apply

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in the design and implementation of data architecture to support data initiatives. Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills: Must-have skills: proficiency in the Databricks Unified Data Analytics Platform. Strong understanding of data modeling and database design principles. Experience with ETL tools and processes. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information: The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform. This position is based at our Indore office. 15 years of full-time education is required.

Posted 6 days ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Senior Data Engineer (Contract)

Location: Bengaluru, Karnataka, India

About the Role: We're looking for an experienced Senior Data Engineer (6-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll own our data pipelines end to end, making sure the data is accurate, reliable, and fast.

What You'll Do:
- Design and build efficient data pipelines using Spark / PySpark / Scala.
- Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs).
- Clean, transform, and prepare data for analysis.
- Use Python for data tasks, automation, and building tools.
- Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure.
- Collaborate closely with the Analytics team to understand what data they need and provide solutions.
- Help develop and maintain our Node.js backend, using TypeScript, for data services.
- Use YAML to manage the settings for our data tools.
- Set up and manage automated deployment processes (CI/CD) using GitHub Actions.
- Monitor and fix problems in our data pipelines to keep them running smoothly.
- Implement checks to ensure our data is accurate and consistent.
- Help design and build data warehouses and data lakes.
- Use SQL extensively to query and work with data in different systems.
- Work with streaming data using technologies like Kafka for real-time data processing.
- Stay updated on the latest data engineering technologies.
- Guide and mentor junior data engineers.
- Help create data management rules and procedures.

What You'll Need:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6-8 years of experience as a Data Engineer.
- Strong skills in Spark and Scala for handling large amounts of data.
- Good experience with Airflow for managing data workflows and understanding DAGs.
- Solid understanding of how to transform and prepare data.
- Strong programming skills in Python for data tasks and automation.
- Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena).
- Experience building data solutions for Analytics teams.
- Familiarity with Node.js for backend development.
- Experience with TypeScript for backend development is a plus.
- Experience using YAML for configuration management.
- Hands-on experience with GitHub Actions for automated deployment (CI/CD).
- Good understanding of data warehousing concepts.
- Strong database skills across OLAP and OLTP systems.
- Excellent command of SQL for data querying and manipulation.
- Experience with stream processing using Kafka or similar technologies.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work well independently and as part of a team.

Bonus Points:
- Familiarity with data lake technologies (e.g., Delta Lake, Apache Iceberg).
- Experience with other stream processing technologies (e.g., Flink, Kinesis).
- Knowledge of data management, data quality, statistics, and data governance frameworks.
- Experience with tools for managing infrastructure as code (e.g., Terraform).
- Familiarity with container technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana).
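To make the streaming requirement concrete, here is a minimal, hypothetical Spark Structured Streaming job in Scala that consumes a Kafka topic; the brokers, topic, and output paths are assumptions for illustration (and the spark-sql-kafka connector must be on the classpath).

```scala
// Minimal Spark Structured Streaming sketch (Scala): consume a Kafka topic
// and land it as parquet. Brokers, topic, and paths are illustrative assumptions.
import org.apache.spark.sql.SparkSession

object OrdersStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("OrdersStream").getOrCreate()

    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092") // hypothetical brokers
      .option("subscribe", "orders")                      // hypothetical topic
      .option("startingOffsets", "latest")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    val query = stream.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/streams/orders")           // landing zone
      .option("checkpointLocation", "s3a://example-bucket/chk/orders") // offset bookkeeping
      .start()

    query.awaitTermination()
  }
}
```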

Posted 6 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore (All CGI Locations + Noida)
Position ID: J0525-0333
Employment Type: Full Time

Position Description:

Job Title: People Manager
Position: Manager Consulting
Experience: 15+ Years

Job Description:

Technologies Used:
- Odoo Platform v15+ based development, with experience in Odoo development and customization.
- Odoo user base (logged-in users) > 1000 users.
- Odoo on Kubernetes (microservices-based architecture) with DevOps understanding.
- Knowledge of Odoo modules, architecture, and APIs.
- Ability to integrate Odoo with other systems and data sources.
- Capable of creating custom modules.
- Able to scale Odoo deployments for a large number of users and transactions.

Programming Languages:
- Proficiency in Python is essential.
- Experience with other programming languages (e.g., Java, Scala) is a plus.

Data Analysis and Reporting:
- Ability to analyze and interpret complex data sets.

Your future duties and responsibilities:
- Team Leadership: Manage and mentor a team of 40+ members, fostering a collaborative and productive work environment.
- Project Management: Oversee the planning, execution, and delivery of projects, ensuring they meet quality standards and deadlines.
- Technical Guidance: Provide technical direction and support in Odoo development, customization, and integration.
- DevSecOps Management: Ensure efficient containerization, CI/CD processes, and Kubernetes cluster management.
- Data Management: Guide the team in data analysis, visualization, and processing large datasets using tools like Superset, Cassandra, and Presto.
- Agile Methodologies: Implement and promote SCRUM and Agile methodologies within the team.
- Stakeholder Communication: Maintain clear and effective communication with stakeholders, ensuring alignment on project goals and progress.
- Performance Monitoring: Track team performance, provide feedback, and implement strategies for continuous improvement.
- Revenue Management: Develop and implement strategies to optimize revenue generation and financial performance.
- Client Engagement: Foster strong relationships with clients, ensuring their needs are met and expectations exceeded.
- New Business Generation: Identify and pursue new business opportunities to drive growth and expand the company's client base.

Required qualifications to be successful in this role:
- Bachelor's degree in Computer Science, Information Technology, Business Administration, or a related field.
- Overall experience of 15+ years in relevant fields.
- Minimum of 3 years in a people-manager role.
- Proven experience in managing large teams and complex projects.
- Strong leadership and interpersonal skills.

Skills: English, ERP System, CSB, Java, PostgreSQL

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.

Posted 6 days ago

Apply

7.5 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with data quality frameworks and best practices.
- Knowledge of programming languages such as Python or Scala.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based in Mumbai.
- 15 years of full-time education is required.

Posted 6 days ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles:

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
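To help with preparation, here is a small, self-contained Scala sketch touching several of the basic and medium questions above: val vs var, case classes with pattern matching, higher-order functions, map vs flatMap, Option with a for-comprehension, and lazy evaluation. Treat it as an illustrative study aid, not as model interview answers.

```scala
// Quick revision sketch for several of the questions above.
object ScalaRevision extends App {
  // val vs var: val is an immutable binding, var can be reassigned.
  val fixed = 10
  var mutable = 10
  mutable += 1 // fine; `fixed += 1` would not compile

  // Case class: structural equality, copy(), and pattern-matching support for free.
  case class User(name: String, age: Int)
  val u = User("Asha", 29)

  // Pattern matching with destructuring and a guard.
  val description = u match {
    case User(_, age) if age < 18 => "minor"
    case User(name, _)            => s"adult: $name"
  }
  println(description) // adult: Asha

  // Higher-order function: takes a function as an argument.
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
  println(applyTwice(_ + 3, 10)) // 16

  // map vs flatMap: flatMap flattens the nested containers.
  println(List(1, 2).map(n => List(n, n)))     // List(List(1, 1), List(2, 2))
  println(List(1, 2).flatMap(n => List(n, n))) // List(1, 1, 2, 2)

  // Option instead of null; a for-comprehension desugars to flatMap/map.
  val maybeSum = for {
    a <- Some(1)
    b <- Some(2)
  } yield a + b
  println(maybeSum) // Some(3)

  // Lazy evaluation: the body runs only on first access.
  lazy val expensive = { println("computing..."); 42 }
  println(expensive) // prints "computing..." then 42
}
```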
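For the concurrency and Futures questions, here is a minimal sketch of two asynchronous steps composed with a for-comprehension; the simulated lookups are placeholders, not a real API.

```scala
// Minimal Future sketch: two asynchronous lookups chained with a
// for-comprehension. The "lookups" here are simulated placeholders.
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureDemo extends App {
  def fetchUserName(id: Int): Future[String] = Future { s"user-$id" }
  def fetchScore(name: String): Future[Int]  = Future { name.length * 10 }

  // The second Future starts only after the first completes (flatMap chaining).
  val result: Future[String] = for {
    name  <- fetchUserName(7)
    score <- fetchScore(name)
  } yield s"$name has score $score"

  result.foreach(println)         // non-blocking callback
  Await.result(result, 5.seconds) // block only in demos/tests, not in production code
}
```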

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
