Home
Jobs

2896 Scala Jobs - Page 26

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Setup a job Alert
Filter
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 10.0 years

4 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri logo

About us Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India. Role & responsibilities We are looking for an enthusiastic and technology-proficient Big Data Engineer, who is eager to participate in the design and implementation of a top-notch Big Data solution to be deployed at massive scale. Our customer is one of the world's largest technology companies based in Silicon Valley with operations all over the world. On this project we are working on the bleeding-edge of Big Data technology to develop high performance data analytics platform, which handles petabytes datasets. Essential functions Participate in design and development of Big Data analytical applications. Design, support and continuously enhance the project code base, continuous integration pipeline, etc. Write complex ETL processes and frameworks for analytics and data management. Implement large-scale near real-time streaming data processing pipelines. Work inside the team of industry experts on the cutting edge Big Data technologies to develop solutions for deployment at massive scale. Qualifications Strong coding experience with Scala, Spark,Hive, Hadoop. In-depth knowledge of Hadoop and Spark, experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams). 
Understanding of the best practices in data quality and quality engineering. Experience with version control systems, Git in particular. Desire and ability for quick learning of new tools and technologies. Would be a plus Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.). Experience with Github-based development processes. Experience with JVM build systems (SBT, Maven, Gradle). We offer Opportunity to work on bleeding-edge projects Work with a highly motivated and dedicated team Competitive salary Flexible schedule Benefits package - medical insurance, sports Corporate social events Professional development opportunities Well-equipped office

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing… We are looking for data engineers who can work with world class team members to help drive telecom business to its full potential . We are building data products / assets for telecom wireless and wireline business which includes consumer analytics, telecom network performance and service assurance analytics etc. We are working on cutting edge technologies like digital twin to build these analytical platforms and provide data support for varied AI ML implementations. As a data engineer you will be collaborating with business product owners , coaches , industry renowned data scientists and system architects to develop strategic data solutions from sources which includes batch, file and data streams As a Data Engineer with ETL/ELT expertise for our growing data platform & analytics teams, you will understand and enable the required data sets from different sources both structured and unstructured data into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within Verizon. Understanding the business requirements and the technical design. Working on Data Ingestion, Preparation and Transformation. Developing data streaming applications. Debugging the production failures and identifying the solution. Working on ETL/ELT development. 
Where You'll Be Working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. What we’re looking for... You’re curious about new technologies and the game-changing possibilities it creates. You like to stay up-to-date with the latest trends and apply your technical expertise to solving business problems. You’ll Need To Have Bachelor’s degree or one or more years of work experience. Experience with Data Warehouse concepts and Data Management life cycle. Experience in any DBMS Experience in Shell scripting, Spark, Scala. Knowledge in GCP/BigQuery. Even Better If You Have Two or more years of relevant experience. Any relevant Certification on ETL/ELT developer. Certification in GCP-Data Engineer. Accuracy and attention to detail. Good problem solving, analytical, and research capabilities. Good verbal and written communication. Experience presenting to and influence stakeholders. #AI&D Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics. Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description Responsible to assemble large, complex sets of data that meet non-functional and functional business requirements. Responsible to identify, design and implement internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using Azure, Databricks and SQL technologies Responsible for the transformation of conceptual algorithms from R&D into efficient, production ready code. The data developer must have a strong mathematical background in order to be able to document and maintain the code Responsible for integrating finished models into larger data processes using UNIX scripting languages such as ksh, Python, Spark, Scala, etc. Produce and maintain documentation for released data sets, new programs, shared utilities, or static data. This must be done within department standards Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing, and visually inspecting data for reasonableness: the requirement is on-time with zero defects Qualifications Education/Training B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research or some other technical field. Course work or experience in Numerical Analysis, Mathematics or Statistics is a plus Hard Skills Proven experience working as a data engineer Highly proficient in using the spark framework (python and/or Scala) Extensive knowledge of Data Warehousing concepts, strategies, methodologies. Programming experience in Python, SQL, Scala Direct experience of building data pipelines using Apache Spark (preferably in Databricks), Airflow. 
Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake Experience with big data technologies (Hadoop) Databricks & Azure Big Data Architecture Certification would be plus Must be team oriented with strong collaboration, prioritization, and adaptability skills required Ability to write highly efficient code in terms of performance / memory utilization Basic knowledge of SQL; capable of handling common functions Experience Minimum 5 -8 year of experience as Data engineer Experience modeling or manipulating large amounts of data is a plus Experience with Demographic, Retail business is a plus Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion Show more Show less

Posted 1 week ago

Apply

3.0 years

15 - 20 Lacs

Gurgaon

On-site

GlassDoor logo

Profile - Sr Data Engineer JOB DESCRIPTION - Experience: 3 + Years Office Location: Phase IV, Udyog Vihar, Sector 18, Gurugram Working Location: HYDERABAD / GURGAON Interview Mode: F2F Work Mode: Hybrid Job Summary: The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision. The person should be a self-motivated person with a passion for problem solving and continuous learning Role and responsibilities: Strong technical, analytical, and problem-solving skills Strong organizational skills, with the ability to work autonomously as well as in a team-based environment Data pipeline framework development Technical skills requirements: CDH On-premise for data processing and extraction Ability to own and deliver on large, multi-faceted projects Fluency in complex SQL and experience with RDBMSs Project Experience in CDH experience, Spark, PySpark, Scala, Python, NiFi, Hive, NoSql DBs) Experience designing and building big data pipelines Experience working on large scale, distributed systems Strong hands-on experience of programming language like PySpark, Scala with Spark, Python. Certification in Hadoop/Big Data – Hortonworks/Cloudera Unix or Shell scripting Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment Qualifications: B.Tech /M.Tech /MS or BCA/MCA degree from a reputed university Job Type: Full-time Pay: ₹1,500,000.00 - ₹2,000,000.00 per year Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon

On-site

GlassDoor logo

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurgaon, Haryana, India; Pune, Maharashtra, India . Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience. 4 years of experience in developing and troubleshooting data processing algorithms. Experience coding with one or more programming languages (e.g., Java, Python) and Bigdata technologies such as Scala, Spark and hadoop frameworks. Experience with one public cloud provider, such as GCP. Preferred qualifications: Experience architecting, developing software, or internet scale production-grade Big Data solutions in virtualized environments. Experience in Big Data, information retrieval, data mining, or Machine Learning. Experience with data warehouses, technical architectures, infrastructure components, Extract Transform and Load/Extract, Load and Transform and reporting/analytic tools, environments, and data structures. Experience in building multi-tier applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience with Infrastructure as Code and Continuous Integration/Continuous Deployment tools like Terraform, Ansible, Jenkins. Understanding one database type, with the ability to write complex SQL queries. About the job The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. 
As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of businesses of all sizes use technology to connect with customers, employees and partners. As a Strategic Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product challenges. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and constantly drive excellence in our products. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services. Engage with technical leads, and partners to lead high velocity migration and modernisation to Google Cloud Platform (GCP). Design, Migrate/Build and Operationalise data storage and processing infrastructure using Cloud native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. 
Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing… We are looking for data engineers who can work with world class team members to help drive telecom business to its full potential. We are building data products / assets for telecom wireless and wireline business which includes consumer analytics, telecom network performance and service assurance analytics etc. We are working on cutting edge technologies like digital twin to build these analytical platforms and provide data support for varied AI ML implementations. As a data engineer you will be collaborating with business product owners, coaches, industry renowned data scientists and system architects to develop strategic data solutions from sources which includes batch, file and data streams As a Data Engineer with ETL/ELT expertise for our growing data platform & analytics teams, you will understand and enable the required data sets from different sources both structured and unstructured data into our data warehouse and data lake with real-time streaming and/or batch processing to generate insights and perform analytics for business teams within Verizon. Understanding the business requirements and the technical design. Working on Data Ingestion, Preparation and Transformation. Developing data streaming applications. Debugging the production failures and identifying the solution. Working on ETL/ELT development. 
What We’re Looking For... You’re curious about new technologies and the game-changing possibilities it creates. You like to stay up-to-date with the latest trends and apply your technical expertise to solving business problems. #AI&D You’ll need to have… Bachelor’s degree or one or more years of work experience. Experience with Data Warehouse concepts and Data Management life cycle. Experience in any DBMS Experience in Shell scripting, Spark, Scala. Experience in GCP, Cloud Composer and BigQuery Even better if you have one or more of the following Two or more years of relevant experience. Any relevant Certification on ETL/ELT developer. Certification in GCP-Data Engineer. Accuracy and attention to detail. Good problem solving, analytical, and research capabilities. Good verbal and written communication. Experience presenting to and influence stakeholders. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics. Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Full-time Job Description Responsible to assemble large, complex sets of data that meet non-functional and functional business requirements. Responsible to identify, design and implement internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using Azure, Databricks and SQL technologies Responsible for the transformation of conceptual algorithms from R&D into efficient, production ready code. The data developer must have a strong mathematical background in order to be able to document and maintain the code Responsible for integrating finished models into larger data processes using UNIX scripting languages such as ksh, Python, Spark, Scala, etc. Produce and maintain documentation for released data sets, new programs, shared utilities, or static data. This must be done within department standards Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing, and visually inspecting data for reasonableness: the requirement is on-time with zero defects Qualifications Education/Training B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research or some other technical field. Course work or experience in Numerical Analysis, Mathematics or Statistics is a plus Hard Skills Proven experience working as a data engineer Highly proficient in using the spark framework (python and/or Scala) Extensive knowledge of Data Warehousing concepts, strategies, methodologies. Programming experience in Python, SQL, Scala Direct experience of building data pipelines using Apache Spark (preferably in Databricks), Airflow. 
Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake Experience with big data technologies (Hadoop) Databricks & Azure Big Data Architecture Certification would be plus Must be team oriented with strong collaboration, prioritization, and adaptability skills required Ability to write highly efficient code in terms of performance / memory utilization Basic knowledge of SQL; capable of handling common functions Experience Minimum 5 -8 year of experience as Data engineer Experience modeling or manipulating large amounts of data is a plus Experience with Demographic, Retail business is a plus Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion I'm interested I'm interested Privacy Policy Show more Show less

Posted 1 week ago

Apply

8.0 years

6 - 8 Lacs

Chennai

On-site

GlassDoor logo

Develop, test, and deploy data processing applications using Apache Spark and Scala. Optimize and tune Spark applications for better performance on large-scale data sets. Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions. Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions. Design and implement high-performance data processing and analytics solutions. Ensure data integrity, accuracy, and security across all processing tasks. Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies. Implement version control and CI/CD pipelines for Spark applications. Required Skills & Experience: Minimum 8 years of experience in application development. Strong hands on experience in Apache Spark, Scala, and Spark SQL for distributed data processing. Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop. Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and Nifi. Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data. Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL. Knowledge of data warehousing concepts, dimensional modeling, and data lakes. Ability to troubleshoot and optimize Spark and Cloudera platform performance. Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).

Posted 1 week ago

Apply

5.0 years

8 - 10 Lacs

Chennai

On-site

GlassDoor logo

Job Description Responsible to assemble large, complex sets of data that meet non-functional and functional business requirements. Responsible to identify, design and implement internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using Azure, Databricks and SQL technologies Responsible for the transformation of conceptual algorithms from R&D into efficient, production ready code. The data developer must have a strong mathematical background in order to be able to document and maintain the code Responsible for integrating finished models into larger data processes using UNIX scripting languages such as ksh, Python, Spark, Scala, etc. Produce and maintain documentation for released data sets, new programs, shared utilities, or static data. This must be done within department standards Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing, and visually inspecting data for reasonableness: the requirement is on-time with zero defects Qualifications Education/Training B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research or some other technical field. Course work or experience in Numerical Analysis, Mathematics or Statistics is a plus Hard Skills Proven experience working as a data engineer Highly proficient in using the spark framework (python and/or Scala) Extensive knowledge of Data Warehousing concepts, strategies, methodologies. Programming experience in Python, SQL, Scala Direct experience of building data pipelines using Apache Spark (preferably in Databricks), Airflow. 
Hands on experience designing and delivering solutions using Azure including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake Experience with big data technologies (Hadoop) Databricks & Azure Big Data Architecture Certification would be plus Must be team oriented with strong collaboration, prioritization, and adaptability skills required Ability to write highly efficient code in terms of performance / memory utilization Basic knowledge of SQL; capable of handling common functions Experience Minimum 5 -8 year of experience as Data engineer Experience modeling or manipulating large amounts of data is a plus Experience with Demographic, Retail business is a plus Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 week ago

Apply

3.0 years

4 - 7 Lacs

Chennai

On-site

GlassDoor logo

- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data, able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and comfortable working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Scala (with Spark), or Python.

Major Responsibilities:
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and non-functional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities: As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading the best practices that deliver high-quality products.

About the team: Profit Intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that leaves Amazon, and we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.
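The extract-and-transform work described above can be sketched in miniature with plain Scala collections. This is a toy illustration only, not Amazon's code: all names are hypothetical, and the collections API stands in for the Spark transformations a real pipeline would use.

```scala
// Toy ETL sketch: parse raw shipment rows, drop malformed records,
// and aggregate profit per region. In production this transform logic
// would typically run on Spark; plain Scala collections stand in here.
object ProfitEtl {
  case class Shipment(region: String, revenue: Double, cost: Double) {
    def profit: Double = revenue - cost
  }

  // Extract: parse a raw CSV-like line, returning None for bad rows.
  def parse(line: String): Option[Shipment] =
    line.split(",").map(_.trim) match {
      case Array(region, rev, cost) =>
        try Some(Shipment(region, rev.toDouble, cost.toDouble))
        catch { case _: NumberFormatException => None }
      case _ => None
    }

  // Transform + load stand-in: aggregate profit per region.
  def profitByRegion(lines: Seq[String]): Map[String, Double] =
    lines.flatMap(parse)
      .groupBy(_.region)
      .view.mapValues(_.map(_.profit).sum)
      .toMap
}
```

The same shape (parse, filter malformed, group, aggregate) carries over directly to a Spark `Dataset[Shipment]`.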
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Mysore, Karnataka, India

On-site


Job Responsibilities
- Develop teaching materials including exercises & assignments
- Conduct classroom training / virtual training
- Design assessments for various proficiency levels in a given competency
- Enhance course material & course delivery based on feedback to improve training effectiveness
- Gather feedback from stakeholders, identify actions based on feedback and implement changes

Location: Mysore, Mangalore, Bangalore, Chennai, Pune, Hyderabad, Chandigarh

Description of the Profile
We are looking for trainers with 7 to 12 years of teaching experience and technology know-how in one or more of the following areas:
- Java: Java programming, Spring, Angular / React, Bootstrap
- Microsoft: C# programming, SQL Server, ADO.NET, ASP.NET, MVC design pattern, Azure, MS Power platforms, MS Dynamics 365 CRM, MS Dynamics 365 ERP, SharePoint
- Testing: Selenium, Micro Focus UFT, Micro Focus ALM tools, SOA testing, SoapUI, REST Assured, Appium
- Big Data: Python programming, Hadoop, Spark, Scala, MongoDB, NoSQL
- SAP: SAP ABAP programming / SAP MM / SAP SD / SAP BI / SAP S/4HANA
- Oracle: Oracle E-Business Suite (EBS) / PeopleSoft / Siebel CRM / Oracle Cloud / OBIEE / Fusion Middleware
- API and integration: API, Microservices, TIBCO, Apigee, Mule
- Digital Commerce: Salesforce, Adobe Experience Manager
- Digital Process Automation: PEGA, Appian, Camunda, Unqork, UiPath
- MEAN / MERN stacks
- Business Intelligence: SQL Server, ETL using SQL Server, Analysis using SQL Server, Enterprise reporting using SQL, Visualization
- Data Science: Python for data science, Machine learning, Exploratory data analysis, Statistics & Probability
- Cloud & Infrastructure Management: Network administration / Database administration / Windows administration / Linux administration / Middleware administration / End User Computing / ServiceNow; cloud platforms like AWS / GCP / Azure / Oracle Cloud; Virtualization
- Cybersecurity: Infra Security / Identity & Access Management / Application Security / Governance & Risk Compliance / Network Security
- Mainframe: COBOL, DB2, CICS, JCL
- Open source: Python, PHP, Unix / Linux, MySQL, Apache, HTML5, CSS3, JavaScript

Posted 1 week ago

Apply

8.0 - 11.0 years

6 - 9 Lacs

Noida

On-site


Snowflake - Senior Technical Lead (Full-time)

Company Description
About Sopra Steria: Sopra Steria, a major tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description
Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida / Bangalore
Education: B.E. / B.Tech. / MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good-to-have Skills: Snowpark, Data Build Tool (dbt), Finance Domain

- Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing
- Experience in data warehousing, with at least 2 years focused on Snowflake
- Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration
- Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks
- Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning
- Familiarity with data security, compliance requirements, and governance best practices
- Experience in Python, Scala, or Java for Snowpark development is good to have
- Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM)

Key Responsibilities
- Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost
- Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe)
- Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion)
- Monitor query performance and resource utilization; tune warehouses, caching, and clustering
- Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads
- Define and enforce role-based access control (RBAC), masking policies, and object tagging
- Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured
- Establish best practices for dimensional modeling, data vault architecture, and data quality
- Create and maintain data dictionaries, lineage documentation, and governance standards
- Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets
- Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies
- Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives

Qualifications: B.Tech / MCA

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.

Posted 1 week ago

Apply

0 years

0 Lacs

Calcutta

On-site


Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum 5 years of experience is required
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements, providing input into final decisions regarding hardware, network products, system software, and security, and utilizing the Databricks Unified Data Analytics Platform to deliver impactful data-driven solutions.

Roles & Responsibilities:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform
- 2 or more years of experience using Python, PySpark or Scala
- Experience with Databricks in the cloud, on any of AWS, Azure or GCP: ETL, data engineering, data cleansing and insertion into a data warehouse
- Must-have skills: Databricks, Cloud Data Architecture, Python Programming Language, Data Engineering

Professional Attributes: Excellent writing, communication and presentation skills. Eagerness to learn and develop oneself on an ongoing basis. Excellent client-facing and interpersonal skills.

Educational Qualification: BE or MCA
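The "data cleansing and insertion into a data warehouse" step mentioned above can be sketched in plain Scala. This is a hedged toy example with hypothetical names; a real pipeline would express the same logic as PySpark or Scala transformations on Databricks.

```scala
// Toy data-cleansing sketch: trim fields, drop rows missing the
// primary key, and deduplicate per key, keeping the latest version.
object Cleansing {
  case class Row(id: String, updatedAt: Long, value: String)

  def cleanse(rows: Seq[Row]): Seq[Row] =
    rows
      .map(r => r.copy(id = r.id.trim, value = r.value.trim))
      .filter(_.id.nonEmpty)         // drop rows with a missing key
      .groupBy(_.id)
      .values
      .map(_.maxBy(_.updatedAt))     // keep the latest row per key
      .toSeq
      .sortBy(_.id)
}
```

On Spark the dedup-keep-latest step is typically a window function (`row_number` over a partition by key, ordered by timestamp descending), but the intent is the same.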

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What you'll be doing…

We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies like digital twins to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer you will be collaborating with business product owners, coaches, industry-renowned data scientists, and system architects to develop strategic data solutions from sources that include batch, file, and data streams.

As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources, both structured and unstructured, into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.

- Understanding the business requirements and the technical design.
- Working on data ingestion, preparation and transformation.
- Developing data streaming applications.
- Debugging production failures and identifying the solution.
- Working on ETL/ELT development.
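As a rough illustration of the streaming-aggregation work such roles involve, here is a minimal tumbling-window count in plain Scala. All names are hypothetical; the collections API stands in for what a streaming engine (e.g. Spark Structured Streaming) would provide over an unbounded source.

```scala
// Toy tumbling-window aggregation: bucket timestamped events into
// fixed-size windows and count events per key within each window.
object StreamSketch {
  case class Event(tsMillis: Long, key: String)

  // Align each event's timestamp to the start of its window,
  // then count events per (windowStart, key) pair.
  def countPerWindow(events: Seq[Event], windowMillis: Long): Map[(Long, String), Int] = {
    require(windowMillis > 0, "window size must be positive")
    events
      .groupBy(e => ((e.tsMillis / windowMillis) * windowMillis, e.key))
      .view.mapValues(_.size)
      .toMap
  }
}
```

A real streaming job would additionally handle late data and watermarks, which a batch-style sketch like this ignores.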
Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

What we're looking for...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.

You'll need to have:
- Bachelor's degree or one or more years of work experience.
- Experience with data warehouse concepts and the data management life cycle.
- Experience with any DBMS.
- Experience in shell scripting, Spark, Scala.
- Knowledge of GCP/BigQuery.

Even better if you have:
- Two or more years of relevant experience.
- Any relevant certification as an ETL/ELT developer.
- Certification as a GCP Data Engineer.
- Accuracy and attention to detail.
- Good problem-solving, analytical, and research capabilities.
- Good verbal and written communication.
- Experience presenting to and influencing stakeholders.

#AI&D

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh

On-site


Review, analyse and evaluate business systems and user needs. Document requirements, define scope and objectives, and formulate systems that parallel overall business strategies. Rely on experience and judgment to plan and accomplish goals.

- Write reusable, testable, and efficient code
- Work collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features
- Determine operational objectives by studying business functions, gathering information, and evaluating output requirements and formats
- Construct workflow charts and diagrams, study system capabilities, and write specifications
- Improve systems by studying current practices and designing modifications
- Recommend controls by identifying problems and writing improved procedures
- Define project requirements by identifying project milestones, phases and elements; forming the project team; and establishing the project budget
- Monitor project progress by tracking activity, resolving problems, publishing progress reports, and recommending actions
- Maintain user confidence and protect operations by keeping information confidential
- Prepare technical reports by collecting, analyzing and summarizing information and trends
- Contribute to team effort by accomplishing related results as needed
- Validate resource requirements and develop cost estimate models
- Conduct and coordinate financial, product, market, operational and related research to support strategic and business planning within the various departments and programs of the client group
- Interpret, evaluate and interrelate research data and develop integrated business analyses and projections for incorporation into strategic decision-making
- Plan and coordinate the development of primary and secondary market research studies in support of strategic planning and specific marketing initiatives, as required, and present findings of studies to client committees
- Perform daily, weekly and monthly reviews and analyses of current processes using operational metrics and reports
- Review a variety of areas including operations, purchasing, inventory, distribution and facilities
- Understand and communicate the financial and operational impact of any changes
- Suggest changes to senior management, using analytics to support your recommendations, and actively participate in the implementation of approved changes
- Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement
- Conduct insightful, ad hoc analyses to investigate ongoing or one-time operational issues

Requirements:
- 8-10 years of experience as a Business Analyst / Data Analyst, with development experience mainly in Spark/Scala
- Must be self-driven and individually own assignments end to end
- Must have experience in the banking domain, such as Payments, Liquidity, Trade, or commercial cards
- Must have experience in data analysis
- Experience in writing SQL queries
- Experience with Agile projects and JIRA

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What you'll be doing…

We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies like digital twins to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer you will be collaborating with business product owners, coaches, industry-renowned data scientists, and system architects to develop strategic data solutions from sources that include batch, file, and data streams.

As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources, both structured and unstructured, into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.

- Understanding the business requirements and the technical design.
- Working on data ingestion, preparation and transformation.
- Developing data streaming applications.
- Debugging production failures and identifying the solution.
- Working on ETL/ELT development.

What we're looking for...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems. #AI&D

You'll need to have:
- Bachelor's degree or one or more years of work experience.
- Experience with data warehouse concepts and the data management life cycle.
- Experience with any DBMS.
- Experience in shell scripting, Spark, Scala.
- Experience in GCP, Cloud Composer and BigQuery.

Even better if you have one or more of the following:
- Two or more years of relevant experience.
- Any relevant certification as an ETL/ELT developer.
- Certification as a GCP Data Engineer.
- Accuracy and attention to detail.
- Good problem-solving, analytical, and research capabilities.
- Good verbal and written communication.
- Experience presenting to and influencing stakeholders.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
- Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform.
- Contribute to pre-sales and sales support through RfP responses, solution architecture, planning and estimation.
- Contribute to reusable component / asset / accelerator development to support capability development.
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies.
- Participate in customer PoCs to deliver the outcomes.
- Participate in delivery reviews / product reviews and quality assurance, and act as design authority.

Preferred Education: Non-Degree Program

Required Technical and Professional Expertise
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
- Experience in data engineering and in architecting and implementing data platforms
- Experience on the Azure cloud platform is mandatory (ADLS Gen1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hubs, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred Technical and Professional Experience
- Experience in architecting complex data platforms on the Azure cloud platform and on-prem
- Experience and exposure to implementation of Data Fabric and Data Mesh concepts, and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualization, Talend, or Tibco Data Fabric
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh, India

On-site


Review, analyse and evaluate business systems and user needs. Document requirements, define scope and objectives, and formulate systems that parallel overall business strategies. Rely on experience and judgment to plan and accomplish goals.

- Write reusable, testable, and efficient code
- Work collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features
- Determine operational objectives by studying business functions, gathering information, and evaluating output requirements and formats
- Construct workflow charts and diagrams, study system capabilities, and write specifications
- Improve systems by studying current practices and designing modifications
- Recommend controls by identifying problems and writing improved procedures
- Define project requirements by identifying project milestones, phases and elements; forming the project team; and establishing the project budget
- Monitor project progress by tracking activity, resolving problems, publishing progress reports, and recommending actions
- Maintain user confidence and protect operations by keeping information confidential
- Prepare technical reports by collecting, analyzing and summarizing information and trends
- Contribute to team effort by accomplishing related results as needed
- Validate resource requirements and develop cost estimate models
- Conduct and coordinate financial, product, market, operational and related research to support strategic and business planning within the various departments and programs of the client group
- Interpret, evaluate and interrelate research data and develop integrated business analyses and projections for incorporation into strategic decision-making
- Plan and coordinate the development of primary and secondary market research studies in support of strategic planning and specific marketing initiatives, as required, and present findings of studies to client committees
- Perform daily, weekly and monthly reviews and analyses of current processes using operational metrics and reports
- Review a variety of areas including operations, purchasing, inventory, distribution and facilities
- Understand and communicate the financial and operational impact of any changes
- Suggest changes to senior management, using analytics to support your recommendations, and actively participate in the implementation of approved changes
- Create informative, actionable and repeatable reporting that highlights relevant business trends and opportunities for improvement
- Conduct insightful, ad hoc analyses to investigate ongoing or one-time operational issues

Requirements:
- 8-10 years of experience as a Business Analyst / Data Analyst, with development experience mainly in Spark/Scala
- Must be self-driven and individually own assignments end to end
- Must have experience in the banking domain, such as Payments, Liquidity, Trade, or commercial cards
- Must have experience in data analysis
- Experience in writing SQL queries
- Experience with Agile projects and JIRA

Posted 1 week ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Full-time. Career Site Team: Data Science & Business Intelligence

Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods / fast-moving consumer goods manufacturers and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. We like to be in the middle of the action. That’s why you can find us at work in over 90 countries, covering more than 90% of the world’s population. For more information, visit www.niq.com.

NielsenIQ is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class.

Job Description
About the job: Help our clients, internal and external, understand and use RMS services better by understanding their requirements and queries, and helping address them through knowledge of data science and RMS.
Responsibilities
- Building knowledge of the Nielsen suite of products and demonstrating the same
- Understanding client concerns
- Able to put forth ways and means of solving client concerns with supervision
- Automation and development of solutions for existing processes
- Taking initiative to understand concerns/problems in the RMS product and participating in product improvement initiatives

Qualifications
- Professionals with degrees in Maths, Data Science, Statistics, or related fields involving statistical analysis of large data sets
- 2-3 years of experience in market research or a relevant field

Mindset and Approach to Work
- Embraces change, innovation and iterative processes in order to continuously improve the product's value to clients
- Continuously collaborates and supports to improve the product
- Active interest in arriving at collaboration and consensus in communication plans, deliverables and deadlines
- Plans and completes assignments independently within an established framework, breaking down complex tasks and making reasonable decisions; work is reviewed for overall technical soundness
- Participates in data experiments and PoCs, setting measurable goals, timelines and reproducible outcomes
- Applies critical thinking and takes initiative
- Continuously reviews the latest industry innovations and effectively applies them to their work
- Consistently challenges and analyzes data to ensure accuracy
Functional Skills
- Ability to manipulate, analyze and interpret large data sources
- Experienced in high-level programming languages (e.g. Python, R, SQL, Scala), as well as with data visualization tools (e.g. Power BI, Spotfire, Tableau, MicroStrategy)
- Able to work in a virtual environment
- Familiar with Git/Bitbucket processes
- People with at least some experience of RMS or NIQ will have an advantage
- Can use a logical reasoning process to break down and work through increasingly challenging situations or problems to arrive at positive outcomes
- Identifies and uses data from various sources to influence decisions
- Interprets data effectively in relation to business objectives

Soft Skills
- Ability to engage/communicate with team and extended team members
- Can adapt to change and new ideas or ways of working
- Exhibits emotional intelligence when partnering with internal and external stakeholders

Additional Information
Our Benefits: flexible working environment, volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About the role
You’ll thrive if you’re hands-on, grounded, and passionate about building with technology. Our diverse tech stack includes TypeScript, Java, Scala, Kotlin, Golang, Elixir, Python, .Net, Node.js, and Rust. This role offers significant impact and growth opportunities while staying hands-on. We focus on lean teams without traditional management layers, working in small, collaborative teams (2-5 people) where a well-founded argument holds more weight than years of experience. You’ll develop tailored software solutions to meet clients’ unique needs across multiple domains.

Responsibilities
• Remain fully hands-on and write high-quality, production-ready code that enables smooth deployment of solutions
• Lead architecture and design decisions, ensuring adherence to best practices in technology choices and system design
• Utilize DevOps tools and practices to automate and streamline the build and deployment processes
• Work closely with Data Scientists and Engineers to deliver robust, production-level AI and Machine Learning systems
• Develop frameworks and tools for efficient data ingestion from diverse and complex sources
• Operate in short, iterative sprints, delivering working software aligned with clear deliverables and client-defined deadlines
• Demonstrate flexibility by learning and working across multiple programming languages and technologies as required

Additional Responsibilities
• Actively contribute to a high-performing engineering culture by working from the office regularly to collaborate closely with fellow senior techies, fostering strong technical discussions and decision-making
• Provide hands-on mentorship and technical guidance that encourages knowledge sharing, continuous improvement, and innovation within your team

Skills you’ll need
• 8+ years of experience as a Software Engineer
• Deep understanding of programming fundamentals and expertise with at least one programming language (functional or object-oriented)
• A nuanced and rich understanding of code quality, maintainability and practices like Test-Driven Development
• Experience with one or more source control and build toolchains; working knowledge of CI/CD is an added advantage
• Understanding of web APIs, contracts and communication protocols
• Understanding of cloud platforms, infra-automation/DevOps, IaC/GitOps/containers, and the design and development of large data platforms
• A maker’s mindset – to be resourceful and have the ability to do things that have no instructions

What will you experience in terms of culture at Sahaj?
• A culture of trust, respect and transparency
• Opportunity to collaborate with some of the finest minds in the industry
• Work across multiple domains

What are the benefits of being at Sahaj?
• Unlimited leaves
• Life insurance & private health insurance paid by Sahaj
• Stock options
• No hierarchy
• Open salaries

Posted 1 week ago

Apply

89.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Full-time

Company Description
GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”.

Job Description
It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview
We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions.
As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities

Technical Leadership & Architecture
• Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science
• Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability
• Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture
• Work closely with Product Owners to translate business needs into technical solutions

Hands-on Development & Technical Excellence
• Lead by example through high-quality coding, code reviews, and proof-of-concept development
• Solve complex engineering problems and contribute to critical design decisions
• Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR
• Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment

Line Management & Team Development
• Provide line management to engineers, ensuring their professional growth and development
• Conduct performance reviews, set development goals, and mentor team members to enhance their skills
• Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries
• Support hiring, onboarding, and career development initiatives within the engineering team
Collaboration & Cross-Team Coordination
• Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components
• Provide mentorship and guidance to developers, helping them level up their skills and technical understanding
• Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code
• Engage with stakeholders across the business, translating technical concepts into business-relevant insights

Governance, Security & Data Best Practices
• Champion data governance, lineage, and security across the platform
• Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines
• Ensure compliance with industry standards, internal policies, and regulatory requirements

Qualifications - Requirements & Experience
• Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages
• Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR
• Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines
• Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions
• Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation)
• Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing
• Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices
• Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders
• Proven line management experience, including mentoring, career development, and performance management of engineering teams
Additional Information

Our Benefits
• Flexible working environment
• Volunteer time off
• LinkedIn Learning
• Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 week ago

Apply

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods/fast-moving consumer goods manufacturers and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what’s happening now, what’s happening next, and how to best act on this knowledge. We like to be in the middle of the action. That’s why you can find us at work in over 90 countries, covering more than 90% of the world’s population. For more information, visit www.niq.com. NielsenIQ is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class.

Job Description
About the job: Help our clients - internal and external - understand and use RMS services better by understanding their requirements and queries, and helping address them through knowledge of data science and RMS.
Responsibilities
• Building knowledge of the Nielsen suite of products and demonstrating the same
• Understanding client concerns
• Able to put forth ways and means of solving client concerns, with supervision
• Automation and development of solutions for existing processes
• Taking initiative to understand concerns/problems in the RMS product and participating in product improvement initiatives

Qualifications
• Professionals with degrees in Maths, Data Science, Statistics, or related fields involving statistical analysis of large data sets
• 2-3 years of experience in market research or a relevant field

Mindset and Approach to Work
• Embraces change, innovation and iterative processes in order to continuously improve the product’s value to clients
• Continuously collaborates and supports efforts to improve the product
• Active interest in arriving at collaboration and consensus on communication plans, deliverables and deadlines
• Plans and completes assignments independently within an established framework, breaking down complex tasks and making reasonable decisions; work is reviewed for overall technical soundness
• Participates in data experiments and PoCs, setting measurable goals, timelines and reproducible outcomes
• Applies critical thinking and takes initiative
• Continuously reviews the latest industry innovations and effectively applies them to their work
• Consistently challenges and analyzes data to ensure accuracy

Functional Skills
• Ability to manipulate, analyze and interpret large data sources
• Experienced in high-level programming languages (e.g. Python, R, SQL, Scala), as well as with data visualization tools (e.g. Power BI, Spotfire, Tableau, MicroStrategy)
• Able to work in a virtual environment; familiar with git/Bitbucket processes
• People with at least some experience in RMS or NIQ will have an advantage
• Can use a logical reasoning process to break down and work through increasingly challenging situations or problems to arrive at positive outcomes
• Identifies and uses data from various sources to influence decisions
• Interprets data effectively in relation to business objectives

Soft Skills
• Ability to engage/communicate with team and extended team members
• Can adapt to change and new ideas or ways of working
• Exhibits emotional intelligence when partnering with internal and external stakeholders

Additional Information

Our Benefits
• Flexible working environment
• Volunteer time off
• LinkedIn Learning
• Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates?
Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Description:
Graviton Research Capital LLP, Gurgaon is looking to hire Software Engineers for our Core Technology team, which has some of the best programmers in India working on cutting-edge technologies to build a super-fast and robust trading infrastructure handling millions of dollars worth of trading transactions every day.

As a Senior Software Engineer with Graviton your responsibilities will include:
• Designing and implementing a high-frequency automated trading system that trades on multiple exchanges
• Building live reporting and administration tools for the trading system
• Performance optimization and improving the overall latency of systems, through algorithm research and using cutting-edge tools and techniques
• End-to-end ownership of modules, including design, development, deployment and support
• Growing the team through involvement in the regular hiring process and occasional campus recruitments

Requirements:
The ideal requirements for our candidates are:
• A degree in Computer Science
• 3-5 years of experience with C/C++ and object-oriented programming
• Experience in the HFT industry
• Expertise in algorithms and data structures
• Excellent problem-solving skills
• Strong communication skills
• A working knowledge of Linux systems

Any of the following is a plus:
• A good understanding of TCP/IP and Ethernet
• Knowledge of any other programming language, e.g. Java, Scala, Python, bash, Lisp, etc.
• Familiarity with parallel programming models and parallel algorithms
• Experience with big data environments, e.g. Hadoop, Spark, etc.

Benefits:
Our open and collaborative work culture gives you the freedom to innovate and experiment. Our cubicle-free offices, non-hierarchical work culture and insistence on hiring the very best create a melting pot for great ideas and technological innovations. Everyone on the team is approachable; there is nothing better than working with friends! Our perks have you covered.
• Competitive compensation
• Annual international team outing
• Fully covered commuting expenses
• Best-in-class health insurance
• Delightful catered breakfasts and lunches
• A well-stocked kitchen
• 4 weeks of annual leave along with market holidays
• Gym and sports club memberships
• Regular social events and clubs
• After-work parties

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Bengaluru

Hybrid


We are looking for an enthusiastic and technology-proficient Big Data Engineer who is eager to participate in the design and implementation of a top-notch Big Data solution to be deployed at massive scale. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Essential functions
• Participate in design and development of Big Data analytical applications
• Design, support and continuously enhance the project code base, continuous integration pipeline, etc.
• Write complex ETL processes and frameworks for analytics and data management
• Implement large-scale, near-real-time streaming data processing pipelines
• Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale

Qualifications
• Strong coding experience with Scala, Spark, Hive, Hadoop
• In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
• Understanding of best practices in data quality and quality engineering
• Experience with version control systems, Git in particular
• Desire and ability to learn new tools and technologies quickly

Would be a plus
• Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.)
• Experience with GitHub-based development processes
• Experience with JVM build systems (SBT, Maven, Gradle)
We offer
• Opportunity to work on bleeding-edge projects
• Work with a highly motivated and dedicated team
• Competitive salary
• Flexible schedule
• Benefits package: medical insurance, sports
• Corporate social events
• Professional development opportunities
• Well-equipped office

About us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience: 5+ Years

Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:
• Minimum 5+ years of experience as a Data Engineer or in a similar data-related role
• Strong proficiency in SQL for querying databases and performing data transformations
• Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions)
• Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks
• Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake)
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka)
• Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques
• Knowledge of data warehousing concepts and database design principles
• Good understanding of NoSQL and big data technologies like MongoDB, Cassandra, Spark, Hadoop, and Hive
• Experience with data modeling and schema design for OLAP and OLTP systems
• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)

Educational Qualification: Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.

Posted 1 week ago

Apply

Exploring Scala Jobs in India

Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving tech ecosystem and have a high demand for Scala professionals.

Average Salary Range

The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Scala job market, a typical career path may look like:

  1. Junior Developer
  2. Scala Developer
  3. Senior Developer
  4. Tech Lead

As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.

Related Skills

In addition to Scala expertise, employers often look for candidates with the following skills:

  • Java
  • Spark
  • Akka
  • Play Framework
  • Functional programming concepts

Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
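To make "functional programming concepts" concrete, here is a minimal sketch of the style interviewers typically probe: immutable values, higher-order functions such as `map`, and function composition with `andThen`. The object and value names are invented for illustration.

```scala
// A small demonstration of functional basics in Scala.
object FunctionalBasics extends App {
  // Immutable list bound with `val` (the binding cannot be reassigned).
  val xs = List(1, 2, 3, 4, 5)

  // Higher-order function: `map` takes a function as an argument.
  val doubled = xs.map(_ * 2) // List(2, 4, 6, 8, 10)

  // Function values composed with `andThen`: first add one, then square.
  val addOne: Int => Int = _ + 1
  val square: Int => Int = x => x * x
  val addThenSquare = addOne.andThen(square)

  println(doubled.sum)      // prints 30
  println(addThenSquare(3)) // prints 16 (square of 3 + 1)
}
```

Being able to walk through a snippet like this, and explain why immutability and higher-order functions matter, goes a long way in Scala interviews.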

Interview Questions

Here are 25 interview questions that you may encounter when applying for Scala roles:

  • What is Scala and why is it used? (basic)
  • Explain the difference between val and var in Scala. (basic)
  • What is pattern matching in Scala? (medium)
  • What are higher-order functions in Scala? (medium)
  • How does Scala support functional programming? (medium)
  • What is a case class in Scala? (basic)
  • Explain the concept of currying in Scala. (advanced)
  • What is the difference between map and flatMap in Scala? (medium)
  • How does Scala handle null values? (medium)
  • What is a trait in Scala and how is it different from an abstract class? (medium)
  • Explain the concept of implicits in Scala. (advanced)
  • What is the Akka toolkit and how is it used in Scala? (medium)
  • How does Scala handle concurrency? (advanced)
  • Explain the concept of lazy evaluation in Scala. (advanced)
  • What is the difference between List and Seq in Scala? (medium)
  • How does Scala handle exceptions? (medium)
  • What are Futures in Scala and how are they used for asynchronous programming? (advanced)
  • Explain the concept of type inference in Scala. (medium)
  • What is the difference between object and class in Scala? (basic)
  • How can you create a Singleton object in Scala? (basic)
  • What is a higher-kinded type in Scala? (advanced)
  • Explain the concept of for-comprehensions in Scala. (medium)
  • How does Scala support immutability? (medium)
  • What are the advantages of using Scala over Java? (basic)
  • How do you implement pattern matching in Scala? (medium)
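Several of the basic questions above can be answered with a few lines of code. The sketch below (object and names are invented for demonstration) touches on `val` vs `var`, case classes, pattern matching, and `Option` as Scala's preferred alternative to null:

```scala
// Short illustrations for a handful of common interview topics.
object InterviewSketches extends App {
  // val vs var: val is an immutable binding, var can be reassigned.
  val fixed   = 10
  var counter = 10
  counter += 1

  // Case class: immutable data with structural equality and a generated apply.
  case class Point(x: Int, y: Int)

  // Pattern matching: destructure a value and branch on its shape.
  def describe(p: Point): String = p match {
    case Point(0, 0) => "origin"
    case Point(x, 0) => s"on x-axis at $x"
    case _           => "somewhere else"
  }

  // Option instead of null: map/getOrElse avoid NullPointerException.
  val maybeName: Option[String] = Some("Scala")
  val nameLength = maybeName.map(_.length).getOrElse(0)

  println(fixed + counter)           // prints 21
  println(Point(1, 2) == Point(1, 2)) // prints true (structural equality)
  println(describe(Point(3, 0)))     // prints "on x-axis at 3"
  println(nameLength)                // prints 5
}
```

Practicing small examples like this for each question in the list is usually more effective than memorizing definitions.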

Closing Remark

As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies