Home
Jobs

2810 Scala Jobs - Page 37

Filter Jobs
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities, where you can make a difference and where no two days are the same.

Your Role
As a Senior Software Engineer with Capgemini, you will have 6+ years of experience in Azure technology and a strong project track record. In this role you will bring:
Strong customer orientation, decision-making, problem-solving, communication and presentation skills
Very good judgement and the ability to shape compelling solutions and solve unstructured problems with assumptions
Very good collaboration skills and the ability to work with multi-cultural and multi-functional teams spread across geographies
Strong executive presence and entrepreneurial spirit
Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
Experience with Azure Databricks and Azure Data Factory
Experience with Azure data components such as Azure SQL Database, Azure SQL Data Warehouse and Synapse Analytics
Experience in Python/PySpark/Scala/Hive programming
Experience with Azure Databricks (ADB) is a must-have
Experience building CI/CD pipelines in data environments

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
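
The following is a minimal, illustrative Scala sketch of the kind of Databricks batch job this role describes: reading raw data from ADLS, applying a simple transformation, and writing a curated output. The storage paths, column names, and aggregation logic are assumptions made for illustration, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesDailyAggregate {
  def main(args: Array[String]): Unit = {
    // On Databricks a session already exists; getOrCreate() simply reuses it.
    val spark = SparkSession.builder().appName("sales-daily-aggregate").getOrCreate()
    import spark.implicits._

    // Hypothetical ADLS paths; with ADF these would usually arrive as pipeline parameters.
    val rawPath = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
    val curatedPath = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

    val daily = spark.read.parquet(rawPath)
      .filter($"amount" > 0)                                  // drop refunds / bad rows
      .withColumn("sale_date", to_date($"sale_timestamp"))
      .groupBy($"sale_date", $"store_id")
      .agg(sum($"amount").as("total_amount"), count(lit(1)).as("txn_count"))

    daily.write.mode("overwrite").partitionBy("sale_date").parquet(curatedPath)
    spark.stop()
  }
}
```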

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Responsibilities:
Develop and Maintain Kafka Solutions: Design, implement, and manage Kafka-based data pipelines to ensure efficient data flow and processing.
Optimize Performance: Monitor and optimize Kafka clusters for high throughput and low latency.
Integration: Integrate Kafka with various systems and tools, ensuring seamless data flow.
Troubleshooting: Identify and resolve issues related to Kafka and data processing.
Documentation: Create and maintain documentation for Kafka configurations and processes.
Security and Compliance: Ensure data security and compliance with industry standards.

Skills Required:
Technical Proficiency: Strong understanding of Apache Kafka architecture, components, and ecosystem tools such as Kafka Connect and Kafka Streams.
Programming Skills: Proficiency in Java, Scala, or Python.
Distributed Systems: Experience with distributed messaging systems and real-time data processing.
Microservices Architecture: Understanding of microservices and event-driven systems.
Data Serialization: Knowledge of data serialization formats like Avro, Protobuf, or JSON.
Cloud Platforms: Familiarity with cloud platforms (AWS, Azure, Google Cloud) and their managed Kafka services.
CI/CD: Experience with CI/CD pipelines and version control tools like Git.
Monitoring Tools: Knowledge of monitoring and logging tools such as Prometheus and Grafana.
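
As a rough illustration of the Kafka Streams skills mentioned above, here is a minimal Scala sketch of a stream-processing topology that reads from one topic, filters and normalises records, and writes to another. It assumes a recent kafka-streams-scala module; the application id, broker address, and topic names are placeholders, not details from the posting.

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object ClickstreamCleaner {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "clickstream-cleaner") // placeholder app id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")   // placeholder broker

    val builder = new StreamsBuilder()
    builder
      .stream[String, String]("clicks-raw")      // placeholder input topic
      .filter((_, value) => value.nonEmpty)      // drop empty events
      .mapValues(_.trim.toLowerCase)             // trivial normalisation step
      .to("clicks-clean")                        // placeholder output topic

    val streams = new KafkaStreams(builder.build(), props)
    streams.start()
    sys.addShutdownHook(streams.close())
  }
}
```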

Posted 1 week ago

Apply

55.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Source: LinkedIn

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your Role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or with a team to help design and develop ML/NLP models as required. You will work closely with the Product Owner, Systems Architect and other key stakeholders from conceptualization through implementation of the project. You should take ownership while understanding the client requirement, the data to be used, security and privacy needs, and the infrastructure to be used for development and implementation. The candidate will be responsible for executing data science projects independently to deliver business outcomes and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The candidate is also expected to collaborate on ML asset creation and be eager to learn and impart training to fellow data science professionals. We expect thought leadership from the candidate, especially on proposing to build ML/NLP assets based on expected industry requirements. Experience in building industry-specific (e.g. Manufacturing, R&D, Supply Chain, Life Sciences), production-ready AI models using microservices and web services is a plus.

Programming Languages – Python: NumPy, SciPy, Pandas, Matplotlib, Seaborn
Databases – RDBMS (MySQL, Oracle etc.), NoSQL stores (HBase, Cassandra etc.)
ML/DL Frameworks – scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks – Spark (Spark ML, GraphX), H2O
Cloud – Azure/AWS/GCP

Your Profile
Predictive and prescriptive modelling using statistical and machine learning algorithms including but not limited to Time Series, Regression, Trees, Ensembles, Neural Nets (deep and shallow – CNN, LSTM, Transformers etc.).
Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection etc. is a plus.
Unsupervised learning – Market Basket Analysis, Collaborative Filtering, dimensionality reduction, and a good understanding of common matrix decomposition approaches like SVD.
Various clustering approaches – hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering such as spectral clustering.
NLP – information extraction, similarity matching, sentiment analysis, text clustering, semantic analysis, document summarization, context mapping/understanding, intent classification, word embeddings, vector space models; experience with libraries like NLTK, spaCy and Stanford CoreNLP is a plus.
Usage of Transformers for NLP, experience with LLMs such as ChatGPT and Llama, usage of RAG (vector stores and frameworks like LangChain and LangGraph), and building agentic AI applications.
Model Deployment – ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and on cloud.

Required Qualifications:
Master’s degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science or Engineering, or a bachelor’s degree with relevant experience.
Good experience in programming with languages such as Python/Java/Scala and SQL, and experience with data visualization tools like Tableau or Power BI.

Preferred Experience:
Experienced in an Agile way of working; able to manage team effort and track it through JIRA.
Experience in proposal, RFP, RFQ and pitch creation and delivery to large forums.
Experience in POC, MVP and PoV asset creation with innovative use cases.
Experience working in a consulting environment is highly desirable.

Presupposition: High-impact client communication. The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.

What You Will Love About Working Here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon (Hybrid)
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities:
● Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
● Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
● SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
● Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT. Design and optimize high-performance data architectures.
● Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
● Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
● Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications:
● Expertise in Snowflake for data warehousing and ELT processes.
● Strong proficiency in SQL for relational databases and writing complex queries.
● Experience with Informatica PowerCenter for data integration and ETL development.
● Experience using Power BI for data visualization and business intelligence reporting.
● Experience with Fivetran for automated ELT pipelines.
● Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
● Strong data analysis, requirement gathering, and mapping skills.
● Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP.
● Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
● Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role: Consultant - Generative AI
Location: Gurgaon

We are seeking a highly skilled Generative AI Engineer to join the team. The Generative AI Engineer will play a pivotal role in designing, coding, and deploying advanced AI solutions using state-of-the-art technologies such as Databricks, AI Fabric, Azure, and Snowflake. This role requires a deep understanding of AI/ML frameworks and cloud-based environments, focusing on building scalable, high-performance AI solutions that drive value for the global network.

Key responsibilities include:

1. AI Solution Development:
• Design, develop, and deploy Generative AI models and solutions that address complex business challenges across advisory, tax, and audit services.
• Leverage platforms such as Databricks for data engineering and AI model development, AI Fabric for orchestration and deployment, and Snowflake for scalable data management.
• Utilize Azure cloud services to implement and scale AI solutions, ensuring high availability, performance, and security.

2. Technical Leadership and Collaboration:
• Collaborate with data scientists, AI architects, and software engineers to define technical requirements and develop end-to-end AI solutions.
• Lead the development of AI models from experimentation and prototyping through to production, ensuring alignment with business objectives.
• Work closely with cross-functional teams to integrate AI solutions into existing workflows and systems, optimizing for efficiency and usability.

3. Coding and Implementation:
• Write high-quality, maintainable code using Python, Scala, or similar programming languages, focusing on AI/ML libraries and frameworks.
• Develop and optimize data pipelines using Databricks, ensuring seamless data flow from ingestion to AI model training and inference.
• Implement AI solutions using AI Fabric, focusing on model orchestration, deployment, and monitoring within a cloud environment.

4. Data Management and Integration:
• Design and manage data architectures using Snowflake, ensuring data is organized, accessible, and secure for AI model training and deployment.
• Integrate data from various sources, transforming and preparing it for AI model development, ensuring data quality and integrity.
• Work with large datasets, applying best practices for data engineering, ETL processes, and real-time data processing.

5. Cloud & Infrastructure Management:
• Deploy AI models and services in Azure, utilizing cloud-native tools and best practices to ensure scalability, reliability, and security.
• Implement CI/CD pipelines to automate the deployment and management of AI models, ensuring rapid iteration and continuous delivery.
• Optimize infrastructure for AI workloads, balancing performance, cost, and resource utilization.

6. Performance Tuning and Optimization:
• Continuously monitor and optimize AI models and data pipelines to improve performance, accuracy, and scalability.
• Implement strategies for model fine-tuning, hyperparameter optimization, and feature engineering to enhance AI solution effectiveness.
• Troubleshoot and resolve technical issues related to AI model deployment, data processing, and cloud infrastructure.

7. Innovation and Continuous Improvement:
• Stay updated with the latest advancements in Gen AI, cloud computing, and big data technologies, applying new techniques to improve solutions.
• Experiment with emerging technologies and frameworks to drive innovation.
• Contribute to the development of AI best practices, coding standards, and technical documentation to ensure consistency and quality across projects.

Experience Required:
• 2+ years of experience in AI, Machine Learning, or related fields, with hands-on experience in developing and deploying AI solutions.
• Proven experience with AI frameworks such as TensorFlow and PyTorch, and experience working with Databricks, AI Fabric, and Snowflake.
• Extensive experience with Azure cloud services, including AI and data services, and a strong background in cloud-native development.
• Expertise in coding with Python, Scala, or similar languages, with a focus on AI/ML libraries and big data processing. Proficiency in designing and coding AI models, data pipelines, and cloud-based solutions.
• Strong understanding of AI/ML algorithms, data engineering, and model deployment strategies.
• Experience with cloud infrastructure management, particularly in Azure, and the ability to optimize AI workloads for performance and cost.
• Excellent problem-solving skills and the ability to work collaboratively in a cross-functional team environment.
• Strong communication skills, with the ability to articulate complex technical concepts to technical and non-technical stakeholders.

Please share your resume at shikha@tdnewton.com

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Source: Naukri

Overview
PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.
Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company.
Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset.
Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders.
Increase awareness about available data and democratize access to it across the company.
As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Act as a subject matter expert across different digital projects.
Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
Responsible for implementing best practices around systems integration, security, performance, and data management.
Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models.
Define and manage SLAs for data products and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.

Qualifications
4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture.
3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala etc.
2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services; Azure Certification is a plus.
Experience in Azure Log Analytics.
Experience with integration of multi-cloud services with on-premises technologies.
Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake.
Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools.
Experience with statistical/ML techniques is a plus.
Experience with building solutions in the retail or supply chain space is a plus.
Experience with version control systems like GitHub and deployment & CI tools.
Working knowledge of agile development, including DevOps and DataOps concepts.
B Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.

Skills, Abilities, Knowledge:
Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management.
Strong change manager; comfortable with change, especially that which arises through company growth.
Ability to understand and translate business requirements into data and technical requirements.
High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
Strong organizational and interpersonal skills; comfortable managing trade-offs.
Foster a team culture of accountability, communication, and self-management.
Proactively drives impact and engagement while bringing others along.
Consistently attain/exceed individual and team goals.
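
Since the posting calls out data-quality tooling such as Deequ, here is a minimal, hedged Scala sketch of how a pipeline step might validate a dataset with Deequ before publishing it. The table path, column names, and the specific checks are illustrative assumptions, not requirements from the posting.

```scala
import com.amazon.deequ.{VerificationResult, VerificationSuite}
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.SparkSession

object OrdersQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-quality-check").getOrCreate()

    // Hypothetical curated dataset; replace with a real path or table.
    val orders = spark.read.parquet("/mnt/datalake/curated/orders")

    val result: VerificationResult = VerificationSuite()
      .onData(orders)
      .addCheck(
        Check(CheckLevel.Error, "basic orders checks")
          .isComplete("order_id")      // no nulls in the key column
          .isUnique("order_id")        // key column has no duplicates
          .isNonNegative("quantity")   // quantities should never be negative
      )
      .run()

    if (result.status != CheckStatus.Success) {
      // In a production pipeline this would raise an alert or fail the job.
      println(s"Data quality checks failed: ${result.status}")
    }
    spark.stop()
  }
}
```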

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Gameskraft
Established in 2017, Gameskraft has become one of India’s fastest-growing companies. We are building the world's most-loved online gaming ecosystem - one game at a time. Started by a group of passionate gamers, we have grown from a small team of five members to a large family of 600+ Krafters, working out of our office in Prestige Tech Park, Bangalore. Our short-term success lies in the fact that we strive to focus on building a safe, secure, and responsible gaming environment for everyone. Our vision is to create unmatched experiences every day, everywhere. We set the highest benchmarks in the industry in terms of design, technology, and intuitiveness. We are also the industry’s only ISO 27001 and ISO 9001 certified gaming company.

About the role
We are hiring a Senior Data Engineer at Gameskraft, one of India's fastest-growing gaming companies, to build and scale a robust data platform. The role involves designing and optimizing data pipelines, developing scalable infrastructure, and ensuring seamless data accessibility for business insights.

Key Responsibilities:
Building and optimizing big data pipelines, architectures, and datasets to handle large-scale data.
Enhancing infrastructure for scalability, automation, and data delivery improvements.
Developing real-time and batch processing solutions using Kafka, Spark, and Airflow.
Ensuring data governance, security compliance, and high availability.
Collaborating with product, business, and analytics teams to support data needs.

Tech Stack:
Big Data Tools: Spark, Kafka, Databricks (Delta Tables), ScyllaDB, Redshift
Data Pipelines & Workflow: Airflow, EMR, Glue, Athena
Programming: Java, Scala, Python
Cloud & Storage: AWS
Databases: SQL, NoSQL (ScyllaDB, OpenSearch)
Backend: Spring Boot

What we expect you will bring to the table:
1. Cutting-Edge Technology & Scale: At Gameskraft, you will be working on some of the most advanced big data technologies, including Databricks Delta Tables, ScyllaDB, Spark, Kafka, Airflow, and Spring Boot. Our systems handle billions of data points daily, ensuring real-time analytics and high-scale performance. If you’re passionate about big data, real-time streaming, and cloud computing, this role offers the perfect challenge.
2. Ownership & Impact: Unlike rigid corporate structures, Gameskraft gives engineers complete freedom and ownership to design, build, and optimize large-scale data pipelines. Your work directly impacts business decisions, game fairness, and player experience, ensuring data is actionable and insightful.
3. High-Growth, Fast-Paced Environment: We are one of India’s fastest-growing gaming companies, scaling rapidly since 2017. You will be part of a dynamic team that moves fast, innovates continuously, and disrupts the industry with cutting-edge solutions.
4. Strong Engineering Culture: We value technical excellence, continuous learning, and deep problem-solving. We encourage engineers to experiment, contribute, and grow, making this an ideal place for those who love tackling complex data engineering challenges.

Why Join Gameskraft?
Work on high-scale, real-time data processing challenges.
Own end-to-end design and implementation of data pipelines.
Collaborate with top-tier engineers and data scientists.
Enjoy a fast-growing and financially stable company.
Freedom to innovate and contribute at all levels.

Work Culture
A true startup culture - young, fast-paced, where you are driven by personal ownership of solving challenges that help you grow fast.
Focus on innovation, data orientation, being results-driven, taking on big goals, and adapting fast.
A high-performance, meritocratic environment, where we share ideas, debate and grow together with each new product.
Massive and direct impact of the work you do, and growth through solving dynamic challenges.
Leveraging technology and analytics to solve large-scale challenges.
Working with cross-functional teams to create great products and take them to market.
Rub shoulders with some of the brightest and most passionate people in the gaming and consumer internet industry.

Compensation & Benefits
Attractive compensation and ESOP packages.
INR 5 Lakh medical insurance cover for yourself and your family.
Fair and transparent performance appraisals.
An attractive car lease policy.
Relocation benefits.
A vibrant office space with fully stocked pantries. And your lunch is on us!
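
To illustrate the real-time Kafka-plus-Spark work described in this role, here is a minimal Scala sketch of a Spark Structured Streaming job that consumes game events from Kafka and maintains windowed counts. The topic, broker address, JSON field, and console sink are placeholder assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object GameEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("game-event-counts").getOrCreate()
    import spark.implicits._

    // Broker and topic are placeholders.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "game-events")
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Assume each message carries a gameId field; schema handling is simplified here.
    val parsed = events.select(
      get_json_object($"json", "$.gameId").as("game_id"),
      $"timestamp"
    )

    val counts = parsed
      .withWatermark("timestamp", "10 minutes")
      .groupBy(window($"timestamp", "1 minute"), $"game_id")
      .count()

    val query = counts.writeStream
      .outputMode("update")
      .format("console")              // in production: a Delta table or another Kafka topic
      .option("truncate", "false")
      .start()

    query.awaitTermination()
  }
}
```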

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Responsibilities:
Evaluate and source appropriate cloud infrastructure solutions for machine learning needs, ensuring cost-effectiveness and scalability based on project requirements.
Automate and manage the deployment of machine learning models into production environments, ensuring version control for models and datasets using tools like Docker and Kubernetes.
Set up monitoring tools to track model performance and data drift, conduct regular maintenance, and implement updates for production models.
Work closely with data scientists, software engineers, and stakeholders to align on project goals, facilitate knowledge sharing, and communicate findings and updates to cross-functional teams.
Design, implement, and maintain scalable ML infrastructure, optimizing cloud and on-premise resources for training and inference.
Document ML processes, pipelines, and best practices while preparing reports on model performance, resource utilization, and system issues.
Provide training and support for team members on MLOps tools and methodologies, and stay updated on industry trends and emerging technologies.
Diagnose and resolve issues related to model performance, infrastructure, and data quality, implementing solutions to enhance model robustness and reliability.

Education, Technical Skills & Other Critical Requirements:
10+ years of relevant experience in AI/analytics product and solution delivery.
Bachelor's/Master's degree in Information Technology, Computer Science, Engineering, or an equivalent field, or equivalent experience.
Proficiency in frameworks such as TensorFlow, PyTorch, or Scikit-learn.
Strong skills in Python and/or R; familiarity with Java, Scala, or Go is a plus.
Experience with cloud services such as AWS, Azure, or Google Cloud Platform, particularly in ML services (e.g., AWS SageMaker, Azure ML).
CI/CD tools (e.g., Jenkins, GitLab CI), containerization (e.g., Docker), and orchestration (e.g., Kubernetes).
Experience with databases (SQL and NoSQL), data pipelines, ETL processes, and ML pipeline orchestration (Airflow).
Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
Proficient in using Git for version control.
Strong analytical and troubleshooting abilities to diagnose and resolve issues effectively.
Good communication skills for working with cross-functional teams and conveying technical concepts to non-technical stakeholders.
Ability to manage multiple projects and prioritize tasks in a fast-paced environment.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Source: LinkedIn

Job Title: Data Analyst (Python + PySpark)

About Us
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Description
Role: Data Analyst / Senior Data Analyst
Location: Bangalore / Pune

Responsibilities
Define and obtain the source data required to successfully deliver insights and use cases.
Determine the data mapping required to join multiple data sets together across multiple sources.
Create methods to highlight and report data inconsistencies, allowing users to review and provide feedback.
Propose suitable data migration sets to the relevant stakeholders.
Assist teams with processing the data migration sets as required.
Assist with the planning, tracking and coordination of the data migration team, the migration run-book, and the scope for each customer.

Role Requirements
Strong Data Analyst with Financial Services experience; knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context.
Knowledge of one or more of the following domains (including market data vendors): Party/Client, Trade, Settlements, Payments, Instrument and Pricing, Market and/or Credit Risk.
Demonstrate a continual desire to implement "strategic" or "optimal" solutions and, where possible, avoid workarounds or short-term tactical solutions.
Work with stakeholders to ensure that negative customer and business impacts are avoided.
Manage stakeholder expectations and ensure that robust communication and escalation mechanisms are in place across the project portfolio.
Good understanding of the control requirements surrounding data handling.

Experience/Skillset
Must have:
Excellent analytical skills and commercial acumen.
Minimum 4+ years of experience with Python and PySpark.
Good understanding of the control requirements surrounding data handling.
Experience of big data programmes preferable.
Strong verbal and written communication skills.
Strong self-starter with strong change delivery skills who enjoys the challenge of delivering change within tight deadlines.
Ability to manage multiple priorities.
Business analysis skills, defining and understanding requirements.
Knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context.
Can write SQL queries and navigate databases, especially Hive; comfortable with tools such as CMD, PuTTY and Notepad++.
Enthusiastic and energetic problem solver keen to join an ambitious team.
Good knowledge of the SDLC and formal Agile processes, a bias towards TDD and a willingness to test products as part of the delivery cycle.
Ability to communicate effectively in a multi-programme environment across a range of stakeholders.
Attention to detail.
Good to have:
Knowledge of and experience in Data Quality & Governance.
For Spark with Scala: working experience using Scala (preferable) or Java for Spark.
For Senior DAs: a proven track record of managing small, delivery-focussed data teams.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Source: Naukri

Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark.
Required Candidate Profile
5+ years of experience in data engineering and big data technologies. Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.). Databricks Certification (mandatory).

Posted 1 week ago

Apply

5.0 years

0 Lacs

Vishakhapatnam, Andhra Pradesh, India

On-site

Source: LinkedIn

Position: Azure Data Engineer
Experience: 5+ years
Location: Visakhapatnam
Primary Skills: Azure Data Factory, Azure Synapse Analytics, PySpark, Scala, CI/CD

Job Description:
5+ years of experience in data engineering or a related field.
Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF).
Proven experience with Databricks, including development in PySpark or Scala.
Proficiency in DBT for data modeling and transformation.
Expertise in SQL and performance tuning techniques.
Solid understanding of data warehousing concepts and ETL/ELT design patterns.
Experience working in Agile environments and familiarity with Git-based version control.
Strong communication and collaboration skills.

Preferred Qualifications:
Experience with CI/CD tools and DevOps for data engineering.
Familiarity with Delta Lake and Lakehouse architecture.
Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps.

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Scala Developer
Experience: 5 to 7 Years
Location: Pune, Bangalore, Indore and Kolkata (Pune preferred)
Employment Type: Full-Time
Joining: Immediate or early joiners preferred

Job Summary:
We are seeking a highly skilled Scala Developer with 5 to 7 years of experience to join our dynamic engineering team. The ideal candidate will have a strong foundation in Scala programming and proven experience working with leading cloud platforms such as AWS, Azure, or GCP. This role requires excellent communication and collaboration skills to effectively engage with cross-functional teams and stakeholders in a fast-paced, agile environment.

Key Responsibilities:
Design, develop, and maintain scalable backend applications using Scala.
Collaborate with product managers, architects, and fellow developers to deliver high-quality solutions.
Integrate with cloud services across AWS, Azure, or GCP, ensuring performance, scalability, and security.
Participate in code reviews, technical discussions, and continuous improvement initiatives.
Troubleshoot and resolve technical issues throughout the software development lifecycle.

Required Skills and Qualifications:
5 to 7 years of professional experience in software development.
Strong hands-on experience with Scala programming.
Working knowledge of cloud technologies – AWS, Azure, or GCP.
Solid understanding of distributed systems and microservices architecture.
Excellent communication and interpersonal skills.
Ability to work effectively in cross-functional, geographically distributed teams.

Preferred Qualifications:
Experience with DevOps tools and CI/CD pipelines.
Familiarity with Agile/Scrum methodologies.
Knowledge of additional JVM-based languages or functional programming concepts is a plus.

What We Offer:
Opportunity to work on cutting-edge technologies and cloud-native projects.
Collaborative and innovation-driven work culture.
Flexible work location, with preference for Pune-based candidates.
Competitive compensation and career growth opportunities.

Ready to make an impact? Apply now and become part of a forward-thinking team shaping the future of software development.
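
Because the role highlights Scala and functional programming concepts, here is a small, self-contained sketch of Either-based validation composed with a for-comprehension, a common functional error-handling pattern. The domain model and rules are invented purely for illustration.

```scala
final case class User(name: String, email: String, age: Int)

object UserValidation {
  // Each rule returns Either: Left with an error message, Right with the validated value.
  def nonEmpty(field: String, value: String): Either[String, String] =
    if (value.trim.nonEmpty) Right(value.trim) else Left(s"$field must not be empty")

  def validEmail(value: String): Either[String, String] =
    if (value.contains("@")) Right(value) else Left(s"'$value' is not a valid email")

  def adult(age: Int): Either[String, Int] =
    if (age >= 18) Right(age) else Left(s"age $age is below 18")

  // Rules compose with a for-comprehension; the first failure short-circuits.
  def validate(name: String, email: String, age: Int): Either[String, User] =
    for {
      n <- nonEmpty("name", name)
      e <- validEmail(email)
      a <- adult(age)
    } yield User(n, e, a)

  def main(args: Array[String]): Unit = {
    println(validate("Asha", "asha@example.com", 30)) // Right(User(Asha,asha@example.com,30))
    println(validate("", "not-an-email", 12))         // Left(name must not be empty)
  }
}
```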

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 18 Lacs

Maharashtra

Work from Office

Source: Naukri

Staff Software Engineers are the technology leaders of our highest impact projects. Your high energy is contagious, you actively collaborate with others across the engineering organization, and you seek to learn as much as you like to teach. You personify the notion of constant improvement as you work with your team and the larger engineering group to build software that delivers on our mission. You use your extraordinary technical competence to ensure a high bar for excellence while you mentor other engineers on their own path towards craftsmanship. You are most likely T-shaped, with broad knowledge across many technologies plus strong skills in a specific area. Staff Software Engineers embrace the opportunity to represent HMH in industry groups and open-source communities.

Area of Responsibility:
You will be working on the HMH Assessment Platform, part of the HMH Educational Online/Digital Learning Platform. The Assessment team builds a highly scalable and available platform. The platform is built using a microservices architecture: Java microservices backend, React JavaScript UI frontend, REST APIs, Postgres database, AWS cloud technologies, AWS Kafka, Kubernetes or Mesos orchestration, DataDog for logging/monitoring/alerting, Concourse CI or Jenkins, Maven, etc.

Responsibilities:
Be the technical lead for feature development in a team of 5-10 engineers, influencing the technical direction of the overall engineering organization.
Decompose business objectives into valuable, incrementally releasable user features, accurately estimating the effort to complete each.
Contribute code to feature development efforts, demonstrating to others efficient design, delivery and testing patterns and techniques.
Strive for high quality outcomes; continuously look for ways to improve team productivity and product reliability, performance, and security.
Develop the talents and abilities of peers and colleagues.
Create a memorable legacy as you progress toward your personal and professional objectives.
Foster your personal and professional development, continually seeking assignments that challenge you.

Skills & Experience:
Successful candidates must demonstrate an appropriate combination of:
10+ years of experience as a software engineer.
3+ years of experience as a Staff or lead software engineer.
Bachelor's degree in computer science or a STEM field.
A portfolio of thought leadership and individual technical accomplishments.
Full understanding of Agile software development methodologies and practices.
Strong communication skills, both verbal and written.
Extensive experience working with technologies and concepts such as:
Behavior-driven or test-driven development
JVM-based languages such as Java and Scala
Development frameworks such as Spring Boot
Asynchronous programming concepts, including event processing
Database technologies such as SQL, Postgres/MySQL, AWS Aurora DBs, Redshift, Liquibase or Flyway
NoSQL technologies such as Redis, MongoDB and Cassandra
Streaming technologies such as Apache Kafka, Apache Spark or Amazon Kinesis
Unit-testing frameworks such as JUnit
Performance testing frameworks such as Gatling
Architectural concepts such as microservices and separation of concerns
Expert knowledge of class-based, object-oriented programming and design patterns
Development tools such as GitHub, Jira, Jenkins, Concourse, and Maven
Cloud technologies such as AWS and Azure
Data center operating technologies such as Kubernetes, Apache Mesos, Apache Aurora, and Terraform, and container services such as Docker and Kubernetes
Monitoring and operational data analysis practices and tools such as DataDog, Splunk and ELK.

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 16 Lacs

Bhubaneswar, Pune, Bengaluru

Work from Office

Source: Naukri

About Client
Hiring for one of the most prestigious multinational corporations.

Job Title: Big Data Engineer (Spark, Scala, SQL)
Experience: 6 to 10 years

Key Responsibilities:
Design, develop, and optimize scalable big data pipelines using Apache Spark and Scala.
Build batch and real-time data processing workflows to ingest, transform, and aggregate large datasets.
Write high-performance SQL queries to support data analysis and reporting.
Collaborate with data architects, data scientists, and business stakeholders to understand requirements and deliver high-quality data solutions.
Ensure data quality, integrity, and governance across systems.
Participate in code reviews and maintain best practices in data engineering.
Troubleshoot and optimize performance of Spark jobs and SQL queries.
Monitor and maintain production data pipelines and perform root cause analysis of data issues.

Technical Skills:
6 to 10 years of overall experience in software/data engineering.
4+ years of hands-on experience with Apache Spark using Scala.
Strong proficiency in Scala and functional programming concepts.
Extensive experience with SQL (preferably in distributed databases like Hive, Presto, Snowflake, or BigQuery).
Experience working in the Hadoop ecosystem (HDFS, Hive, HBase, Oozie, etc.).
Knowledge of data modeling, data architecture, and ETL frameworks.
Familiarity with version control (Git), CI/CD pipelines, and DevOps practices.
Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.

Notice period: up to 60 days
Location: Bengaluru / Bhubaneswar / Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA.
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
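
As a rough sketch of the Spark-plus-SQL work this role describes, here is a minimal Scala job that joins two Hive-style tables with Spark SQL and writes an aggregated result. The database, table, and column names are invented for illustration, and the broadcast hint simply assumes the dimension table is small relative to the fact table.

```scala
import org.apache.spark.sql.SparkSession

object OrdersRevenueReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-revenue-report")
      .enableHiveSupport()          // lets spark.sql resolve existing Hive tables
      .getOrCreate()

    // Illustrative Hive tables; in practice these would already exist in the metastore.
    spark.table("sales_db.orders").createOrReplaceTempView("orders_v")
    spark.table("sales_db.customers").createOrReplaceTempView("customers_v")

    val report = spark.sql(
      """
        |SELECT /*+ BROADCAST(c) */
        |       c.region,
        |       SUM(o.amount)  AS total_revenue,
        |       COUNT(*)       AS order_count
        |FROM orders_v o
        |JOIN customers_v c ON o.customer_id = c.customer_id
        |WHERE o.order_date >= '2024-01-01'
        |GROUP BY c.region
        |""".stripMargin)

    report.write.mode("overwrite").saveAsTable("sales_db.revenue_by_region")
    spark.stop()
  }
}
```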

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Key Responsibilities
Partner with product managers, engineers, and business stakeholders to define KPIs and success metrics for Creator Success.
Create comprehensive dashboards and self-service analytics tools using QuickSight, Tableau, or similar BI platforms.
Perform deep-dive analysis on customer behavior, content performance, and livestream engagement patterns.
Design, build, and maintain robust ETL/ELT pipelines to process large volumes of streaming and batch data from the Creator Success platform.
Develop and optimize data warehouses, data lakes, and real-time analytics systems using AWS services (Redshift, S3, Kinesis, EMR, Glue).
Implement data quality frameworks and monitoring systems to ensure data accuracy and reliability.
Build automated data validation and alerting mechanisms for critical business metrics.
Generate actionable insights from complex datasets to drive product roadmap and business strategy.

Required Qualifications
Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field.
3+ years of experience in business intelligence/analytics roles with proficiency in SQL, Python, and/or Scala.
Strong experience with AWS cloud services (Redshift, S3, EMR, Glue, Lambda, Kinesis).
Expertise in building and optimizing ETL pipelines and data warehousing solutions.
Proficiency with big data technologies (Spark, Hadoop) and distributed computing frameworks.
Experience with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices.
Collaborative approach with cross-functional teams including product, engineering, and business teams.
Customer-obsessed mindset with a focus on delivering high-quality, actionable insights.

Non-Negotiable Skills
High proficiency in SQL and Python.
Expertise in building and optimizing ETL pipelines and data warehousing solutions.
Experience with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices.
Experience working with cross-functional teams including product, engineering, and business teams.
Experience with AWS cloud services (Redshift, S3, EMR).

Posted 1 week ago

Apply

20.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Senior Data Solution Architect

Job Summary:
The Senior Data Solution Architect is a visionary and technical leader responsible for designing and guiding enterprise-scale data solutions. Leveraging 20+ years of experience, this individual works closely with business and IT stakeholders to deliver scalable, secure, and high-performing data architectures that support strategic goals, data-driven innovation, and digital transformation. This role encompasses solution design, platform modernization, cloud data architecture, and deep integration with enterprise systems.

Key Responsibilities:

Solution Architecture & Design
Lead the end-to-end architecture of complex data solutions across domains including analytics, AI/ML, MDM, and real-time processing.
Design robust, scalable, and future-ready data architectures using modern technologies (e.g., cloud data platforms, streaming, NoSQL, graph databases).
Deliver solutions that balance performance, scalability, security, and cost-efficiency.

Enterprise Data Integration
Architect seamless data integration across legacy systems, SaaS platforms, IoT, APIs, and third-party data sources.
Define and implement enterprise-wide ETL/ELT strategies using tools like Informatica, Talend, DBT, Azure Data Factory, or AWS Glue.
Support real-time and event-driven architecture with tools such as Kafka, Spark Streaming, or Flink.

Cloud Data Platforms & Infrastructure
Design cloud-native data solutions on AWS, Azure, or GCP (e.g., Redshift, Snowflake, BigQuery, Databricks, Synapse).
Lead cloud migration strategies from legacy systems to modern, cloud-based data architectures.
Define standards for cloud data governance, cost management, and performance optimization.

Data Governance, Security & Compliance
Partner with governance teams to enforce enterprise data governance frameworks.
Ensure solutions comply with regulations such as GDPR, HIPAA, CCPA, and industry-specific mandates.
Embed security and privacy by design in data architectures (encryption, role-based access, masking, etc.).

Technical Leadership & Stakeholder Engagement
Serve as a technical advisor to CIOs, CDOs, and senior business executives on data strategy and platform decisions.
Mentor architecture and engineering teams; provide guidance on solution patterns and best practices.
Facilitate architecture reviews, proofs of concept (POCs), and technology evaluations.

Innovation & Continuous Improvement
Stay abreast of emerging trends in data engineering, AI, data mesh, data fabric, and edge computing.
Evaluate and introduce innovative tools and patterns (e.g., serverless data pipelines, federated data access).
Drive architectural modernization, legacy decommissioning, and platform simplification.

Qualifications:
Education: Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field; Master's or MBA preferred.
Experience: 20+ years in IT with at least 10 years in data architecture or solution architecture roles.
Demonstrated experience in large-scale, complex data platform architecture and enterprise transformations.
Deep experience with multiple database technologies (SQL, NoSQL, columnar, time series).
Strong programming/scripting background (e.g., Python, Scala, Java, SQL).
Proven experience architecting on at least one major cloud provider (AWS, Azure, GCP).
Familiarity with DevOps, CI/CD, and DataOps practices.

Preferred Certifications:
AWS/Azure/GCP Solution Architect (Professional level preferred)
TOGAF or Zachman Framework certification
Snowflake/Databricks Certified Architect
CDMP (Certified Data Management Professional) or DGSP

Key Competencies:
Strategic and conceptual thinking with the ability to translate business needs into technical solutions.
Exceptional communication, presentation, and negotiation skills.
Leadership in cross-functional teams and matrix environments.
Deep understanding of business processes, data monetization, and digital strategy.

Success Indicators:
Delivery of transformative data platforms that enhance analytics and decision-making.
Improved data integration, quality, and access across the enterprise.
Successful migration to cloud-native or hybrid architectures.
Reduction of technical debt and legacy system dependencies.
Increased reuse of solution patterns, accelerators, and frameworks.

Posted 1 week ago

Apply

8.0 - 12.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

Happiest Minds Technologies Pvt. Ltd. is looking for a Senior Data and ML Engineer to join our dynamic team and embark on a rewarding career journey.
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow
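
Given the Spark MLlib and Scala stack listed above, here is a minimal, hedged sketch of an MLlib training pipeline (label indexer, feature assembler, logistic regression) with model persistence. The source table, column names, and model path are assumptions for illustration only.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").getOrCreate()

    // Hypothetical training table with a string label and two numeric features.
    val training = spark.table("ml_db.churn_training")

    val labelIndexer = new StringIndexer()
      .setInputCol("churned")                   // "yes"/"no" -> 1.0/0.0
      .setOutputCol("label")

    val assembler = new VectorAssembler()
      .setInputCols(Array("monthly_spend", "tenure_months"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setMaxIter(50)
      .setRegParam(0.01)

    val pipeline = new Pipeline().setStages(Array(labelIndexer, assembler, lr))
    val model = pipeline.fit(training)

    // Persist the fitted pipeline so a separate scoring job can reload it later.
    model.write.overwrite().save("/mnt/models/churn-lr")
    spark.stop()
  }
}
```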

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

Build, deploy, and maintain machine learning models in production.
Automate model training, evaluation, and monitoring pipelines.
Collaborate with data engineers to ensure the availability of clean, high-quality data.
Optimize model performance and computational efficiency.
Document ML workflows and processes for scalability and reproducibility.

Key Skills:
Proficiency in Python, Scala, or Java.
Experience with ML tools like TensorFlow, PyTorch, and MLflow.
Familiarity with MLOps practices and tools like Docker, Kubernetes, and CI/CD pipelines.
Strong problem-solving and analytical skills.
Keywords: Machine Learning, Python, Scala, Java

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Source: Naukri

BS or higher degree in Computer Science (or equivalent field).
3-6+ years of programming experience with Java and Python.
Strong in writing SQL queries, with an understanding of Kafka, Scala and Spark/Flink.
Exposure to AWS Lambda, AWS CloudWatch, Step Functions, EC2, CloudFormation, Jenkins.

Posted 1 week ago

Apply

10.0 - 15.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Source: Naukri

Good knowledge of the broadcast ecosystem and content-processing elements, including workflows.
Hands-on in BMS, Traffic and Playout, with at least one globally renowned OEM product in each area.
Good knowledge of dealing with currency data and reports from Nielsen/BARC.
Good understanding of the sales function in broadcast, including traffic and currency, affiliate, and non-linear distribution.
Has worked on, or is certified in, cloud, with experience running and porting media systems into the cloud.
Knowledge of OEM products dealing with DAM/MAM/CMS.
Should have a good understanding of the content-processing flow, including pre-production, production and distribution.
Good exposure to emerging technologies like Data Analytics and Gen AI in solving practical industry problems.
Experience with content-processing elements, streaming standards and protocols is an advantage.

JD for Media Consultant
Engages with customers and brings in value through prolific solutioning.
Be the domain consultant and act as a bridge between the customer and the delivery teams.
Translate business requirements into clear and concise functional specifications and solutions for technical teams.
Propose innovative and practical solutions to address market and business challenges.
Work and develop relationships with partners, working with them to create market-led solutions.
Constantly be on the lookout for ways to create solutions that deliver better value to the customers.
Work with BDM and plan sales strategies in response to the market and key accounts.
Take ownership of opportunities and the preparation of responses to RFP/RFI or ad-hoc requirements, working with other stakeholders.

Posted 1 week ago

Apply

3.0 - 5.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

Azure Databricks experience: 3 to 5 years. Proficiency in Databricks, Apache Spark, and Delta Lake. Strong understanding of cloud platforms such as AWS, Azure, or GCP. Experience with SQL, Python, Scala, and/or R. Familiarity with data warehousing concepts and ETL processes. Problem-solving: excellent analytical and problem-solving skills with keen attention to detail. Databricks Associate certification.
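As a rough illustration of the Databricks and Delta Lake skills this role asks for, here is a minimal, hypothetical Scala sketch of an incremental upsert into a Delta table using the Delta Lake MERGE API. The table paths, column names, and matching key are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession
import io.delta.tables.DeltaTable

object CustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("customer-upsert").getOrCreate()

    // Hypothetical daily batch of changed customer records
    val updates = spark.read.parquet("/mnt/raw/customers_delta_batch")

    // Target Delta table, assumed to already exist at this path
    val target = DeltaTable.forPath(spark, "/mnt/curated/customers")

    // Upsert: update rows that match on the key, insert the rest
    target.as("t")
      .merge(updates.as("s"), "t.customer_id = s.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```

MERGE like this is the usual way to keep a curated Delta table in sync with change batches without rewriting the whole table.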

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Gurugram, Bengaluru

Work from Office

Naukri logo

We are looking for an experienced Senior Big Data Developer to join our team and help build and optimize high-performance, scalable, and resilient data processing systems. You will work in a fast-paced startup environment, handling highly loaded systems and developing data pipelines that process billions of records in real time. As a key member of the Big Data team, you will be responsible for architecting and optimizing distributed systems, leveraging modern cloud-native technologies, and ensuring high availability and fault tolerance in our data infrastructure.

Primary Responsibilities: Design, develop, and maintain real-time and batch processing pipelines using Apache Spark, Kafka, and Kubernetes. Architect high-throughput distributed systems that handle large-scale data ingestion and processing. Work extensively with AWS services, including Kinesis, DynamoDB, ECS, S3, and Lambda. Manage and optimize containerized workloads using Kubernetes (EKS) and ECS. Implement Kafka-based event-driven architectures to support scalable, low-latency applications. Ensure high availability, fault tolerance, and resilience of data pipelines. Work with MySQL, Elasticsearch, Aerospike, Redis, and DynamoDB to store and retrieve massive datasets efficiently. Automate infrastructure provisioning and deployment using Terraform, Helm, or CloudFormation. Optimize system performance, monitor production issues, and ensure efficient resource utilization. Collaborate with data scientists, backend engineers, and DevOps teams to support advanced analytics and machine learning initiatives. Continuously improve and modernize the data architecture to support growing business needs.

Required Skills: 7-10+ years of experience in big data engineering or distributed systems development. Expert-level proficiency in Scala, Java, or Python. Deep understanding of Kafka, Spark, and Kubernetes in large-scale environments. Strong hands-on experience with AWS (Kinesis, DynamoDB, ECS, S3, etc.). Proven experience working with highly loaded, low-latency distributed systems. Experience with Kafka, Kinesis, Flink, or other streaming technologies for event-driven architectures. Expertise in SQL and database optimizations for MySQL, Elasticsearch, and NoSQL stores. Strong experience in automating infrastructure using Terraform, Helm, or CloudFormation. Experience managing production-grade Kubernetes clusters (EKS). Deep knowledge of performance tuning, caching strategies, and data consistency models. Experience working in a startup environment, adapting to rapid changes and building scalable solutions from scratch.

Nice to Have: Experience with machine learning pipelines and AI-driven analytics. Knowledge of workflow orchestration tools such as Apache Airflow.
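To make the streaming-pipeline expectations concrete, here is a minimal, hypothetical Scala sketch of a Spark Structured Streaming job that consumes a Kafka topic and writes the parsed records to a Parquet sink with checkpointing. Broker addresses, the topic, the event schema, and paths are illustrative assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{LongType, StringType, StructType}

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-ingest").getOrCreate()

    // Assumed JSON payload schema for events on the topic
    val schema = new StructType()
      .add("user_id", StringType)
      .add("url", StringType)
      .add("ts", LongType)

    // Read the Kafka topic as an unbounded stream
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092,broker-2:9092")
      .option("subscribe", "clickstream")
      .option("startingOffsets", "latest")
      .load()

    // Kafka values arrive as bytes; cast to string and parse the JSON payload
    val events = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(from_json(col("json"), schema).as("e"))
      .select("e.*")

    // Write micro-batches to Parquet; the checkpoint lets the query recover after restarts
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/clickstream/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/clickstream/")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```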

Posted 1 week ago

Apply

2.0 - 8.0 years

6 - 10 Lacs

Kolkata, Mumbai, Hyderabad

Work from Office

Naukri logo

ROLE 1:
- PowerBI and AAS expert (Strong SC or Specialist Senior)
- Should have hands-on experience of data modelling in Azure SQL Data Warehouse and Azure Analysis Services
- Should be able to write and test DAX queries
- Should be able to generate paginated reports in PowerBI
- Should have a minimum of 3 years' working experience delivering projects in PowerBI

ROLE 2:
- Databricks expert (Strong SC or Specialist Senior)
- Should have a minimum of 3 years' working experience writing code in Spark and Scala

ROLE 3:
- One Azure backend expert (Strong SC or Specialist Senior)
- Should have hands-on experience working with ADLS, ADF and Azure SQL DW
- Should have a minimum of 3 years' working experience delivering Azure projects

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Role Description: Sr. Data Engineer – Big Data

The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated, with a passion for problem solving and continuous learning.

Role and responsibilities:
• Strong technical, analytical, and problem-solving skills
• Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
• Data pipeline framework development

Technical skills requirements — the candidate must demonstrate proficiency in:
• CDH (on-premise) for data processing and extraction
• Ability to own and deliver on large, multi-faceted projects
• Fluency in complex SQL and experience with RDBMSs
• Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive and NoSQL DBs
• Experience designing and building big data pipelines
• Experience working on large-scale, distributed systems
• Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python
• Certification in Hadoop/Big Data – Hortonworks/Cloudera
• Unix or shell scripting
• Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations
• Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Are you serving notice period at your current organization?
Education: Bachelor's (Required)
Experience: Python: 3 years (Required)
Work Location: In person
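As an illustration of the kind of batch pipeline work this role involves, below is a minimal, hypothetical Scala sketch of a Spark job that reads from a Hive table, aggregates, and writes a partitioned Parquet output. The database, table, column names, and filter date are assumptions for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, sum}

object DailySalesRollup {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark read tables registered in the Hive metastore
    val spark = SparkSession.builder()
      .appName("daily-sales-rollup")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical raw transactions table maintained by an upstream ingestion job
    val txns = spark.table("raw.sales_transactions")
      .filter(col("txn_date") === "2024-06-01")

    // Aggregate revenue and transaction counts per store and day
    val rollup = txns
      .groupBy("store_id", "txn_date")
      .agg(
        sum("amount").as("total_revenue"),
        count("*").as("txn_count")
      )

    // Write partitioned Parquet output for downstream consumers
    rollup.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("/data/curated/daily_sales_rollup")

    spark.stop()
  }
}
```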

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Data scientist with a strong background in data mining, machine learning, recommendation systems, and statistics. Should possess the signature strengths of a qualified mathematician, with the ability to apply concepts from Mathematics and Applied Statistics, and specialization in one or more of NLP, Computer Vision, Speech or Data Mining, to develop models that provide effective solutions. A strong data engineering background with hands-on coding capabilities is needed to own and deliver outcomes. A Master's or PhD degree in a highly quantitative field (Computer Science, Machine Learning, Operational Research, Statistics, Mathematics, etc.) or equivalent experience, plus 5+ years of industry experience in predictive modelling, data science and analysis, with prior experience in an ML or data scientist role and a track record of building ML or DL models.

Responsibilities and skills:
Work with our customers to deliver an ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to organizations. Selecting features, building and optimizing classifiers using ML techniques. Data mining using state-of-the-art methods, creating text mining pipelines to clean and process large unstructured datasets to reveal high-quality information and hidden insights using machine learning techniques.

Should be able to appreciate and work on one of the following areas:
Computer Vision — for example, extracting rich information from images to categorize and process visual data, developing machine learning algorithms for object and image classification, with experience in using DBSCAN, PCA, Random Forests and Multinomial Logistic Regression to select the best features to classify objects. OR
NLP — a deep understanding of the fundamentals of information retrieval, deep learning approaches, transformers, attention models, text summarisation, attribute extraction, etc. Preferably experience in one or more of the following areas: recommender systems, moderation of user-generated content, sentiment analysis, etc. OR
Speech — speech recognition, speech to text and vice versa, understanding of NLP and IR, text summarisation, statistical and deep learning approaches to text processing, with experience of having worked in these areas.

Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc. Appreciation for deep learning frameworks like MXNet, Caffe2, Keras, TensorFlow. Experience in working with GPUs to develop models and handling terabyte-size datasets. Experience with common data science toolkits such as R, Weka, NumPy, MATLAB, mlr, MLlib, scikit-learn, caret, etc. — excellence in at least one of these is highly desirable. Should be able to work hands-on in Python, R, etc. Should closely collaborate and work with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm, etc. Experience with NoSQL databases and familiarity with data visualization tools will be a great advantage.

What will you experience in terms of culture at Sahaj? A culture of trust, respect and transparency; the opportunity to collaborate with some of the finest minds in the industry; and work across multiple domains.

What are the benefits of being at Sahaj? Unlimited leaves, life insurance and private health insurance, stock options, no hierarchy, and open salaries.
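For the text-mining side of a role like this, a typical Spark-based approach tokenizes raw text, hashes terms into features, and fits a classifier. Below is a minimal, hypothetical Scala sketch using Spark MLlib's Tokenizer, HashingTF, IDF, and NaiveBayes; the input path, column names, and parameters are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import org.apache.spark.ml.classification.NaiveBayes
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

object ReviewSentimentJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("review-sentiment").getOrCreate()

    // Hypothetical dataset: a free-text column and a numeric label column
    val reviews = spark.read.parquet("/data/labelled_reviews")
    val Array(train, test) = reviews.randomSplit(Array(0.8, 0.2), seed = 7L)

    // Split text into tokens, hash tokens into a sparse term-frequency vector,
    // then rescale with inverse document frequency
    val tokenizer = new Tokenizer().setInputCol("review_text").setOutputCol("tokens")
    val tf = new HashingTF().setInputCol("tokens").setOutputCol("tf").setNumFeatures(1 << 18)
    val idf = new IDF().setInputCol("tf").setOutputCol("features")

    val nb = new NaiveBayes().setLabelCol("label").setFeaturesCol("features")

    val model = new Pipeline().setStages(Array(tokenizer, tf, idf, nb)).fit(train)

    // Simple accuracy check on the held-out split
    val accuracy = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setMetricName("accuracy")
      .evaluate(model.transform(test))
    println(s"Test accuracy = $accuracy")
  }
}
```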

Posted 1 week ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles (a short code sketch after the list illustrates several of the basic concepts):

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
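
The following is a small, self-contained Scala sketch touching several of the basic topics above: val vs var, case classes, pattern matching, higher-order functions, and Option-based null handling. It is illustrative only; interviewers will expect you to explain the behaviour rather than reproduce any particular snippet.

```scala
object InterviewWarmup extends App {
  // val is an immutable binding; var can be reassigned
  val greeting = "hello"
  var counter = 0
  counter += 1

  // A case class gets equality, hashCode, copy and pattern-matching support for free
  case class User(name: String, age: Int)

  // Pattern matching deconstructs values by shape, with optional guards
  def describe(u: User): String = u match {
    case User(_, age) if age < 18 => "minor"
    case User(name, _)            => s"adult: $name"
  }

  // A higher-order function takes (or returns) another function
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  // Option replaces null: map/getOrElse instead of null checks
  val maybeAge: Option[Int] = Some(30)
  val nextYear = maybeAge.map(_ + 1).getOrElse(0)

  println(describe(User("Asha", 30)))   // adult: Asha
  println(applyTwice(_ * 2, 3))         // 12
  println(s"$greeting $counter $nextYear")
}
```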

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
