
6093 Scala Jobs - Page 36

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

10 - 14 Lacs

Chennai

Work from Office

Role Description
Provides leadership for the overall architecture, design, development, and deployment of a full-stack, cloud-native data analytics platform.
- Design and augment the solution architecture for data ingestion, data preparation, data transformation, data load, ML and simulation modelling, Java back end and front end, state machine, API management, and intelligence consumption using data products on cloud.
- Understand business requirements and help develop high-level and low-level data engineering and data processing documentation for the cloud-native architecture.
- Develop conceptual, logical, and physical target-state architecture, engineering, and operational specs.
- Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform.
- Model and design the application data structure, storage, and integration.
- Lead the database analysis, design, and build effort.
- Work with the application architects and designers to design the integration solution.
- Ensure that the database designs fulfil the requirements, including data volume, frequency needs, and long-term data growth.
- Able to perform data engineering tasks using Spark.
- Knowledge of developing efficient frameworks for development and testing (Sqoop/NiFi/Kafka/Spark/Streaming/WebHDFS/Python) to enable seamless data ingestion onto the Hadoop/BigQuery platforms.
- Enable data governance and data discovery.
- Exposure to job monitoring frameworks along with validation automation.
- Exposure to handling structured, unstructured, and streaming data.

Technical Skills
- Experience building data platforms on cloud (data lake, data warehouse environments, Databricks).
- Strong technical understanding of data modelling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data.
- Proven background of designing and implementing architectural solutions that solve strategic and tactical business needs.
- Deep knowledge of best practices, through relevant experience, across data-related disciplines and technologies, particularly enterprise-wide data architectures, data management, data governance, and data warehousing.
- Highly competent in database design and data modelling.
- Strong data warehousing and business intelligence skills, including handling ELT and scalability issues for enterprise-level data warehouses and creating ETLs/ELTs to handle data from various sources and formats.
- Strong hands-on experience with programming languages such as Python and Scala, with Spark and Beam.
- Solid hands-on and solution-architecting experience in cloud technologies: AWS, Azure, and GCP (GCP preferred).
- Hands-on experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming).
- Hands-on experience with GCP services such as BigQuery, Dataproc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, data lake, Bigtable, Spark, and Apache Beam, plus feature engineering/data processing for model development.
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
- Experience building data pipelines for structured/unstructured, real-time/batch, and events/synchronous/asynchronous data using MQ, Kafka, and stream processing.
- Hands-on experience analysing source system data and data flows, and working with structured and unstructured data.
- Must be very strong in writing Spark SQL queries (a brief illustrative sketch follows this listing).
- Strong organisational skills, with the ability to work autonomously as well as lead a team.
- Pleasant personality, strong communication and interpersonal skills.

Qualifications
- A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead.
- Certification in GCP would be a big plus.
- Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute.
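The Spark SQL expectation above can be illustrated with a minimal Scala sketch. The bucket paths, table, and column names below are hypothetical placeholders, not details from the role.

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingestion-quality-report")
      .getOrCreate()

    // Hypothetical raw ingestion data registered as a temporary view.
    spark.read.parquet("s3://example-bucket/raw/transactions/")
      .createOrReplaceTempView("transactions")

    // A typical analytical Spark SQL query: daily volume and value per source system.
    val daily = spark.sql(
      """
        |SELECT source_system,
        |       to_date(event_ts) AS event_date,
        |       COUNT(*)          AS txn_count,
        |       SUM(amount)       AS total_amount
        |FROM transactions
        |GROUP BY source_system, to_date(event_ts)
        |ORDER BY event_date, source_system
      """.stripMargin)

    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_txn_summary/")
    spark.stop()
  }
}
```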

Posted 2 weeks ago

Apply

5.0 - 8.0 years

11 - 16 Lacs

Gurugram

Work from Office

Role Description: Senior Scala Data Engineer
The Scala data engineer needs to be able to understand existing code and help refactor and migrate it into a new environment.

Role and responsibilities
* Read existing Scala Spark code.
* Create unit tests for Scala Spark code.
* Enhance and write Scala Spark code (an illustrative sketch follows this listing).
* Proficient in working with S3 files in CSV and Parquet formats.
* Proficient in working with MongoDB.
* Build up environments independently to test assigned work; execute manual and automated tests.
* Experience with enterprise tools such as Git, Azure, TFS.
* Experience with Jira or a similar defect-tracking tool.
* Engage and participate in an Agile team of world-class software developers.
* Apply independence and creativity to problem solving across project assignments.
* Effectively communicate with team members, project managers, and clients as required.

Core Skills: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform

Technical skills requirements
The candidate must demonstrate proficiency in:
* Reading and writing Scala Spark code.
* Good programming knowledge of Scala and Python.
* SQL and BDD framework knowledge.
* Experience with the AWS stack: S3, Glue, Step Functions.
* Experience in Agile/Scrum development and the full SDLC from development to production deployment.
* Good communication skills.
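As a rough illustration of the day-to-day work this posting describes (reading CSV from S3 in Scala Spark, writing Parquet, and unit-testing the transformation), here is a minimal sketch. The bucket paths, column names, and the TradeLoader/TradeLoaderSpec names are invented for illustration only.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object TradeLoader {
  // Read a CSV file from S3 into a DataFrame (header row, inferred schema).
  def readCsv(spark: SparkSession, path: String): DataFrame =
    spark.read.option("header", "true").option("inferSchema", "true").csv(path)

  // Keep only rows with a non-null key; a stand-in for the refactorable logic the role describes.
  def cleanse(df: DataFrame): DataFrame =
    df.filter(col("trade_id").isNotNull)

  def writeParquet(df: DataFrame, path: String): Unit =
    df.write.mode("overwrite").parquet(path)
}

// A minimal ScalaTest unit test for the transformation, run against a local SparkSession.
import org.scalatest.funsuite.AnyFunSuite

class TradeLoaderSpec extends AnyFunSuite {
  private val spark = SparkSession.builder().master("local[2]").appName("test").getOrCreate()
  import spark.implicits._

  test("cleanse drops rows with a null trade_id") {
    val input = Seq((Some("T1"), 100.0), (None, 50.0)).toDF("trade_id", "notional")
    assert(TradeLoader.cleanse(input).count() == 1)
  }
}
```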

Posted 2 weeks ago

Apply

3.0 years

30 - 40 Lacs

Noida, Uttar Pradesh, India

On-site

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services, and we partner with our customers to monetize their data and make enterprise data dance.

Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of life for our families, customers, partners, and the community.

Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.

Position Overview
Seeking an experienced data engineer to design, develop, and productionize graph database solutions using Neo4j for economic data analysis and modeling. This role requires expertise in graph database architecture, data pipeline development, and production system deployment.

Key Responsibilities
Graph Database Development
- Design and implement Neo4j graph database schemas for complex economic datasets.
- Develop efficient graph data models representing economic relationships, transactions, and market dynamics.
- Create and optimize Cypher queries for complex analytical workloads (see the sketch after this listing).
- Build graph-based data pipelines for real-time and batch processing.

Data Engineering & Pipeline Development
- Architect scalable data ingestion frameworks for structured and unstructured economic data.
- Develop ETL/ELT processes to transform relational and time-series data into graph formats.
- Implement data validation, quality checks, and monitoring systems.
- Build APIs and services for graph data access and manipulation.

Production Systems & Operations
- Deploy and maintain Neo4j clusters in production environments.
- Implement backup, disaster recovery, and high-availability solutions.
- Monitor database performance, optimize queries, and manage capacity planning.
- Establish CI/CD pipelines for graph database deployments.

Economic Data Specialization
- Model financial market relationships, economic indicators, and trading networks.
- Create graph representations of supply chains, market structures, and economic flows.
- Develop graph analytics for fraud detection, risk assessment, and market analysis.
- Collaborate with economists and analysts to translate business requirements into graph solutions.

Required Qualifications
Technical Skills:
- Neo4j Expertise: 3+ years of hands-on experience with Neo4j database development.
- Graph Modeling: strong understanding of graph theory and data modeling principles.
- Cypher Query Language: advanced proficiency in writing complex Cypher queries.
- Programming: Python, Java, or Scala for data processing and application development.
- Data Pipeline Tools: experience with Apache Kafka, Apache Spark, or similar frameworks.
- Cloud Platforms: AWS, GCP, or Azure, with containerization (Docker, Kubernetes).

Database & Infrastructure
- Experience with graph database administration and performance tuning.
- Knowledge of distributed systems and database clustering.
- Understanding of data warehousing concepts and dimensional modeling.
- Familiarity with other databases (PostgreSQL, MongoDB, Elasticsearch).

Economic Data Experience
- Experience working with financial datasets, market data, or economic indicators.
- Understanding of financial data structures and regulatory requirements.
- Knowledge of data governance and compliance in financial services.

Preferred Qualifications
- Neo4j Certification: Neo4j Certified Professional or Graph Data Science certification.
- Advanced Degree: master's in computer science, economics, or a related field.
- Industry Experience: 5+ years in financial services, fintech, or economic research.
- Additional Skills: machine learning on graphs, network analysis, time-series analysis.

Technical Environment
- Neo4j Enterprise Edition with APOC procedures
- Apache Kafka for streaming data ingestion
- Apache Spark for large-scale data processing
- Docker and Kubernetes for containerized deployments
- Git and Jenkins/GitLab CI for version control and deployment
- Monitoring tools: Prometheus, Grafana, ELK stack

Application Requirements
- Portfolio demonstrating Neo4j graph database projects.
- Examples of production graph systems you've built.
- Experience with economic or financial data modeling preferred.

Skills: Graph Databases, Neo4j, Python
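For candidates unfamiliar with running Cypher from application code, the following is a minimal sketch of the kind of graph workload the posting describes, written in Scala against the official Neo4j Java driver. The node labels (Supplier, Product), the credentials, and the bolt URI are assumptions for illustration only.

```scala
import org.neo4j.driver.{AuthTokens, GraphDatabase, Values}

object SupplyChainGraphSketch {
  def main(args: Array[String]): Unit = {
    // Connection details are placeholders; in production these come from configuration/secrets.
    val driver = GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "password"))
    val session = driver.session()
    try {
      // Upsert a supplier -> product relationship (an illustrative economic-graph shape).
      session.run(
        """MERGE (s:Supplier {name: $supplier})
          |MERGE (p:Product {sku: $sku})
          |MERGE (s)-[:SUPPLIES]->(p)""".stripMargin,
        Values.parameters("supplier", "Acme Metals", "sku", "SKU-001"))

      // Analytical Cypher query: how many products does each supplier feed into the chain?
      val result = session.run(
        "MATCH (s:Supplier)-[:SUPPLIES]->(p:Product) RETURN s.name AS supplier, count(p) AS products")
      while (result.hasNext) {
        val record = result.next()
        println(s"${record.get("supplier").asString()} supplies ${record.get("products").asLong()} products")
      }
    } finally {
      session.close()
      driver.close()
    }
  }
}
```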

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Bengaluru

Hybrid

Greetings from Altimetrik. We are looking for a highly skilled and experienced C# Developer to join our dynamic team. The ideal candidate will have a strong background in C# and WPF.

Technical Skills & Qualifications:
- Strong experience in SQL, ETL, Spark, Hive, and data warehouse/data mart design
- Strong experience in Python/PySpark
- Strong in Java or Scala (mandatory)
- Good shell-scripting skills
- Experience in AWS or Azure

Educational Qualification: Bachelor's degree in Engineering or a Master's degree
Experience: 5 to 9 years
Mandatory Skills: SQL, ETL, (Python/PySpark) + (Scala/Java), AWS/Azure
Notice period: Immediate joiner or serving notice period

If interested, please share the details below by email so we can reach you.
Email id: sranganathan11494@altimetrik.com
- Total years of experience:
- Experience relevant to SQL:
- Relevant experience in ETL:
- Relevant experience in PySpark:
- Relevant experience in Scala:
- Relevant experience in Java:
- Current CTC:
- Expected CTC:
- Notice period:
- Company name:
- Contact no.:
- Contact email id:
- Current location:
- Preferred location:
- Are you willing to work 2 days from the office (Bangalore)?

Thanks,
R Sasikala

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Chennai

Work from Office

Project description
You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization.

Responsibilities
Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse/analytics suite of applications? Do you take pride in the quality of software delivered in each development iteration? We're looking for someone like that to join us and be a part of a high-performing team on a high-profile project.
- Solve challenging problems in an elegant way
- Master state-of-the-art technologies
- Build a highly responsive, fast-updating application in an Agile and Lean environment
- Apply best development practices and effectively utilize technologies
- Work across the full delivery cycle to ensure high-quality delivery
- Write high-quality code and adhere to coding standards
- Work collaboratively with diverse teams of technologists

You are
- Curious and collaborative, comfortable working independently as well as in a team
- Focused on delivery to the business
- Strong in analytical skills; for example, you must understand the key dependencies among existing systems in terms of the flow of data among them, and learn to understand the 'big picture' of how the IB industry/business functions
- Able to quickly absorb new terminology and business requirements
- Already strong in analytical tools, technologies, platforms, etc., with a strong desire for learning and self-improvement
- Open to learning home-grown technologies, supporting current-state infrastructure, and helping drive future-state migrations
- Imaginative and creative with newer technologies
- Able to accurately and pragmatically estimate the development effort required for specific objectives

You will have the opportunity to work under minimal supervision to understand local and global system requirements, and to design and implement the required functionality, bug fixes, and enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising directly with members of other development teams, some local and some remote.

Skills
Must have
- A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering
- 5+ years of relevant experience as a data engineer in Big Data
- Strong knowledge of programming languages (Python/Scala) and Big Data technologies (Spark, Databricks or equivalent)
- Strong experience executing complex data analysis and running complex SQL/Spark queries
- Strong experience building complex data transformations in SQL/Spark (illustrated in the sketch after this listing)
- Strong knowledge of database technologies
- Strong knowledge of Azure Cloud is advantageous
- Good understanding of and experience with Agile methodologies and delivery
- Strong communication skills with the ability to build partnerships with stakeholders
- Strong analytical, data management, and problem-solving skills

Nice to have
- Experience working with QlikView
- Understanding of QlikView scripting and data models
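A minimal Scala sketch of the "complex SQL/Spark transformations" this role mentions, assuming hypothetical trades and clients tables; none of the table or column names come from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DistributionAnalytics {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ib-distribution-analytics")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source tables; real names would come from the warehouse being modernized.
    val trades  = spark.read.table("analytics.trades")
    val clients = spark.read.table("analytics.clients")

    // A typical "complex transformation": join, filter, and aggregate revenue per region and desk.
    val revenueByRegion = trades
      .join(clients, Seq("client_id"))
      .filter(col("trade_date") >= lit("2024-01-01"))
      .groupBy(col("region"), col("desk"))
      .agg(sum("revenue").as("total_revenue"), countDistinct("client_id").as("active_clients"))
      .orderBy(desc("total_revenue"))

    revenueByRegion.show(20, truncate = false)
    spark.stop()
  }
}
```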

Posted 2 weeks ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Lead Software Engineer - Backend
We're seeking a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, environmental, and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
- And much more.

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities
As a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure (a streaming sketch follows this listing)
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zones (be available by phone or email in case something goes wrong)

Desired Characteristics
- Minimum 5 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficiency with relational databases and advanced SQL
- Expert in the usage of services like Spark and Hive
- Experience with web frameworks such as Flask or Django
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience in agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale, multi-tenant distributed systems
- Experience with columnar/NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake
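As a hedged illustration of the real-time side of this role, here is a minimal Spark Structured Streaming sketch in Scala that reads identity signals from Kafka and computes windowed counts. The topic name, broker address, and JSON schema are assumptions, not Zeta internals.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object IdentityEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("identity-event-stream").getOrCreate()

    // Assumed JSON payload for an identity signal; the real schema is not described in the posting.
    val eventSchema = new StructType()
      .add("profile_id", StringType)
      .add("signal_type", StringType)
      .add("event_ts", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder brokers
      .option("subscribe", "identity-signals")          // placeholder topic
      .load()
      .select(from_json(col("value").cast("string"), eventSchema).as("e"))
      .select("e.*")

    // Count signals per type over 5-minute windows, a common health/quality metric.
    val counts = events
      .withWatermark("event_ts", "10 minutes")
      .groupBy(window(col("event_ts"), "5 minutes"), col("signal_type"))
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```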

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Hyderabad

Work from Office

CDP ETL & Database Engineer
The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, have an analytical mindset, and have a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams. The candidate will join a team of developers across the US, India, and Costa Rica.

Responsibilities
ETL Development: The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML (a brief sketch follows this listing).

Implementations & Onboarding: Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows.

Incremental Change Requests: The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards implementation and execution of each request. This requires a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes.

Change Data Management: The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests are presented and approved. Prior to introducing a change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment.

Collaboration & Process Improvement: The engineer will participate in knowledge-share sessions, engaging with peers to discuss solutions, best practices, overall approach, and process. The candidate will look for opportunities to streamline processes, with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements
The CDP ETL & Database Engineer will be well versed in the following areas:
- Relational data modeling
- ETL and FTP concepts
- Advanced analytics using SQL functions
- Cloud technologies: AWS, Snowflake
- Able to decipher requirements, provide recommendations, and implement solutions within predefined timeframes
- Ability to work independently while also contributing in a team setting; able to confidently communicate status, raise exceptions, and voice concerns to their direct manager
- Participate in internal client project status meetings with the Solution/Delivery management teams
- When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements
- Ability to work in a fast-paced, agile environment, with a sense of urgency when escalated issues arise
- Strong communication and interpersonal skills; ability to multitask and prioritize workload based on client demand
- Familiarity with Jira for workflow management and time allocation
- Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives, etc.

Required Skills
ETL:
- ETL tools such as Talend (preferred, not required); DMExpress (nice to have); Informatica (nice to have)
Database:
- Hands-on experience with the following database technologies: Snowflake (required); MySQL/PostgreSQL (nice to have); familiarity with NoSQL DB methodologies (nice to have)
Programming languages (can demonstrate knowledge of any of the following):
- PL/SQL; JavaScript (strong plus); Python (strong plus); Scala (nice to have)
AWS (knowledge of the following services):
- S3, EMR (concepts), EC2 (concepts), Systems Manager / Parameter Store
Other:
- Understands JSON data structures and key-value pairs
- Working knowledge of code repositories such as Git, WinCVS, SVN
- Workflow management tools such as Apache Airflow, Kafka, Automic/Appworx
- Jira

Minimum Qualifications
- Bachelor's degree or equivalent
- 2-4 years' experience
- Excellent verbal and written communication skills
- Self-starter, highly motivated
- Analytical mindset
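A minimal sketch, in Scala Spark, of the kind of ETL file validation and aggregation described above for client onboarding; the JSON layout, bucket paths, and field names are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CdpFileValidation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cdp-file-validation").getOrCreate()

    // Hypothetical client feed of JSON records (one object per line); the layout is an assumption.
    val feed = spark.read.json("s3://example-client-bucket/incoming/customers/*.json")

    // Simple ETL-style validation: required keys are present and the email field is populated.
    val validated = feed.withColumn(
      "is_valid",
      col("customer_id").isNotNull && col("email").isNotNull && length(col("email")) > 3)

    // Aggregation of the kind used during onboarding: record counts by validity.
    validated.groupBy("is_valid").count().show()

    // Syndicate the clean slice downstream as Parquet.
    validated.filter(col("is_valid")).drop("is_valid")
      .write.mode("overwrite").parquet("s3://example-client-bucket/validated/customers/")

    spark.stop()
  }
}
```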

Posted 2 weeks ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Minimum of 4+ years of software development experience, with demonstrated expertise in standard development best-practice methodologies.

Skills required:
- Spark, Scala, Python, HDFS, Hive, schedulers (Oozie, Airflow), Kafka
- Spark/Scala and SQL
- RDBMS
- Docker, Kubernetes
- RabbitMQ/Kafka
- Monitoring tools: Splunk or ELK

Profile required:
- Integrate test frameworks into the development process
- Refactor existing solutions to make them reusable and scalable
- Work with operations to get solutions deployed
- Take ownership of production deployment of code
- Collaborate with and/or lead cross-functional teams to build and launch applications and data platforms at scale, for either revenue-generating or operational purposes
- Come up with coding and design best practices
- Thrive in a self-motivated, internal-innovation-driven environment
- Adapt quickly to new application knowledge and changes

Posted 2 weeks ago

Apply

4.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

- Design, develop, and test components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture (a brief Spark-and-Hive sketch follows this listing).
- Contribute to the design, development, and deployment of new features and new components in the Azure public cloud.
- Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs.
- Ensure the processes in TOP provide optimal performance, and assist in performance tuning and optimization.
- Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure craftsmanship practices are followed.
- Follow the Agile-at-Scale process: participate in PI planning and follow-up, sprint planning, and backlog maintenance in Jira.
- Organize training sessions on the core platform and related technologies for the tribe/business line to keep relevant stakeholders continuously updated on the platform's evolution.
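To make the Spark-plus-Hive datalake work concrete, here is a minimal Scala sketch of a batch enrichment job that reads Hive tables and writes a curated, partitioned table back. The database and table names are placeholders; the actual TOP schemas are not public.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TopTradeEnrichment {
  def main(args: Array[String]): Unit = {
    // Hive support is needed to read and write metastore tables from Spark.
    val spark = SparkSession.builder()
      .appName("top-trade-enrichment")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical datalake tables.
    val trades   = spark.table("top_raw.trades")
    val products = spark.table("top_ref.products")

    val enriched = trades
      .join(broadcast(products), Seq("product_id"), "left")
      .withColumn("load_date", current_date())

    // Persist back to the curated Hive layer, partitioned by load date.
    enriched.write
      .mode("overwrite")
      .partitionBy("load_date")
      .saveAsTable("top_curated.trades_enriched")

    spark.stop()
  }
}
```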

Posted 2 weeks ago

Apply

4.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Work from Office

- Design, develop, and test components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture.
- Contribute to the design, development, and deployment of new features and new components in the Azure public cloud.
- Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs.
- Ensure the processes in TOP provide optimal performance, and assist in performance tuning and optimization.
- Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure craftsmanship practices are followed.
- Follow the Agile-at-Scale process: participate in PI planning and follow-up, sprint planning, and backlog maintenance in Jira.
- Organize training sessions on the core platform and related technologies for the tribe/business line to keep relevant stakeholders continuously updated on the platform's evolution.

Profile required
- Around 4-6 years of experience in the IT industry, preferably in the banking domain.
- Expertise and experience in Java 1.8 (building APIs, Java threads, collections, streaming, dependency injection/inversion), JUnit, big data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault); should have been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring.
- Development experience in REST APIs.
- Experience in managing tools: Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, Sonar.
- Knowledge of Agile practices and Agile at Scale.
- Good communication/collaboration skills.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Work from Office

- Design, develop, and test components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a datalake architecture.
- Contribute to the design, development, and deployment of new features and new components in the Azure public cloud.
- Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs.
- Ensure the processes in TOP provide optimal performance, and assist in performance tuning and optimization.
- Release and deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes. Ensure craftsmanship practices are followed.
- Follow the Agile-at-Scale process: participate in PI planning and follow-up, sprint planning, and backlog maintenance in Jira.
- Organize training sessions on the core platform and related technologies for the tribe/business line to keep relevant stakeholders continuously updated on the platform's evolution.

Profile required
- Around 4-6 years of experience in the IT industry, preferably in the banking domain.
- Expertise and experience in Java 1.8 (building APIs, Java threads, collections, streaming, dependency injection/inversion), JUnit, big data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault); should have been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring.
- Development experience in REST APIs.
- Experience in managing tools: Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, Sonar.
- Knowledge of Agile practices and Agile at Scale.
- Good communication/collaboration skills.

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and building awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include, but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths, and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

Responsibilities
- Lead software application backend design, development, delivery, and maintenance.
- Evaluate and select alternative technical solutions for identified requirements, drawing on knowledge of backend and J2EE application development.
- Work with an onshore team to clarify business requirements into product features, acting as a liaison between business and technical teams.
- Resolve technical issues and provide technical support.
- Provide technical guidance and assistance to other software engineers.
- Prepare the staffing plan and allocation of resources.
- Assist project managers in resolving issues and conflicts within their projects.
- Improve customer relations through effective communication, managing expectations, and meeting commitments.
- Keep abreast of technical and organizational developments in your own professional field.

Required Qualifications
- Bachelor's degree in computer science, information technology, or a related area (equivalent work experience will be considered).
- 1+ years' experience developing business applications across a full software development life cycle using web technologies.
- 1+ years' experience in software development analysis and design (UML).
- Advanced experience with Node.js, ReactJS, JavaScript, TypeScript, HTML5, CSS3, SASS, Python, and web service integration.
- Solid technical background in J2EE, Struts, Spring, Hibernate, and MuleSoft.
- Experience with PostgreSQL, Microsoft SQL Server, Nginx, Docker, Redis, Spring Boot and Spring Cloud, web services, and WebSphere/JBoss/WebLogic.
- Experience using at least one of the following cloud platforms: Azure, AWS, GCP; a deep understanding of Azure DevOps, Azure Synapse Analytics, Databricks, Delta Lake, and the lakehouse architecture is preferred.
- Experience designing, developing, and optimizing data processing applications using Apache Spark in Databricks; capable of writing efficient Spark jobs in languages such as Scala, Python (PySpark), and Spark SQL.
- Familiarity with the application and integration of generative AI, prompt engineering, and large language models (LLMs) in enterprise solutions.
- Demonstrated ability to independently design and implement the backend of an entire business module.
- Excellent interpersonal skills, particularly in balancing requirements, managing expectations, collaborating with team members, and driving effective results.
- Proactive attitude, ability to work independently, and a desire to continuously learn new skills and technologies.
- Excellent written and verbal communication skills in English.

Additional or Preferred Qualifications
- Master's degree in computer science, information technology, or related majors.
- Technical lead experience.
- 3+ years' experience developing business applications across a full software development life cycle using web technologies.
- Experience using Azure and either AWS or GCP.
- Experience with data visualization tools such as Power BI or Tableau.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
As a Sr. Data Engineer in the Sales Automation Engineering team, you should be able to work across the different areas of data engineering and data architecture, including the following:
- Data Migration: from Hive/other DBs to Salesforce/other DBs and vice versa
- Data Modeling: understand existing sources and data models, identify the gaps, and build the future-state architecture
- Data Pipelines: build data pipelines for several data mart/data warehouse and reporting requirements
- Data Governance: build the framework for data governance and data quality profiling and reporting

What the Candidate Will Do
- Demonstrate strong knowledge of, and the ability to operationalize, leading data technologies and best practices.
- Collaborate with internal business units and data teams on business requirements, data access, processing/transformation, and reporting needs, and leverage existing and new tools to provide solutions.
- Build dimensional data models to support business requirements and reporting needs.
- Design, build, and automate the deployment of data pipelines and applications to support reporting and data requirements.
- Research and recommend technologies and processes to support rapid scale and future-state growth initiatives on the data front.
- Prioritize business needs, leadership questions, and ad-hoc requests for on-time delivery.
- Collaborate on architecture and technical design discussions to identify and evaluate high-impact process initiatives.
- Work with the team to implement data governance and access control, and to identify and reduce security risks.
- Perform and participate in code reviews, peer inspections, and technical design/specifications.
- Develop performance metrics to establish process success, and work cross-functionally to consistently and accurately measure success over time.
- Deliver measurable business process improvements while re-engineering key processes and capabilities, and map them to the future-state vision.
- Prepare documentation and specifications on detailed design.
- Work in a globally distributed team in an Agile/Scrum approach.

Basic Qualifications
- Bachelor's degree in computer science or a similar technical field of study, or equivalent practical experience.
- 8+ years of professional software development experience, including experience in the data engineering and architecture space.
- Interact with product managers and business stakeholders to understand data needs and help build data infrastructure that scales across the company.
- Very strong SQL skills: advanced-level SQL coding (window functions, CTEs, dynamic variables, hierarchical queries, materialized views, etc.); a brief sketch follows this listing.
- Experience with data-driven architecture and systems design; knowledge of Hadoop-related technologies such as HDFS, Apache Spark, Apache Flink, Hive, and Presto.
- Good hands-on experience with object-oriented programming languages like Python.
- Proven experience in large-scale distributed storage and database systems (SQL or NoSQL, e.g. Hive, MySQL, Cassandra) and data warehousing architecture and data modeling.
- Working experience in cloud technologies like GCP, AWS, Azure.
- Knowledge of reporting tools like Tableau and/or other BI tools.

Preferred Qualifications
- Python libraries (Apache Spark, Scala)
- Working experience in cloud technologies like GCP, AWS, Azure
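For the advanced-SQL expectation (window functions and CTEs), here is a minimal Scala sketch that runs such a query through Spark SQL. The sales.opportunities table and its columns are hypothetical placeholders for whatever the sales data mart exposes.

```scala
import org.apache.spark.sql.SparkSession

object SalesAccountRanking {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sales-account-ranking")
      .enableHiveSupport()
      .getOrCreate()

    // CTE plus a window function: rank accounts by monthly won revenue.
    val topAccounts = spark.sql(
      """
        |WITH monthly AS (
        |  SELECT account_id,
        |         date_trunc('month', closed_at) AS month,
        |         SUM(amount)                    AS monthly_revenue
        |  FROM sales.opportunities
        |  WHERE stage = 'closed_won'
        |  GROUP BY account_id, date_trunc('month', closed_at)
        |)
        |SELECT account_id, month, monthly_revenue,
        |       RANK() OVER (PARTITION BY month ORDER BY monthly_revenue DESC) AS revenue_rank
        |FROM monthly
      """.stripMargin)
      .where("revenue_rank <= 10")

    topAccounts.show(50, truncate = false)
    spark.stop()
  }
}
```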

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Data Engineer - Investment
Experience: 6-10 years
Location: Hyderabad
Primary skills: ETL, Informatica, SQL, Python, and the investment domain
Please share your resumes with jyothsna.g@technogenindia.com.

Job Description:
• 7-9 years of experience with data analytics, data modeling, and database design.
• 3+ years of coding and scripting (Python, Java, Scala) and design experience.
• 3+ years of experience with the Spark framework.
• 5+ years of experience with ELT methodologies and tools.
• 5+ years' mastery in designing, developing, tuning, and troubleshooting SQL.
• Knowledge of Informatica PowerCenter and Informatica IDMC.
• Knowledge of distributed, column-oriented technology used to create high-performance database technologies like Vertica and Snowflake.
• Strong data analysis skills for extracting insights from financial data.
• Proficiency in reporting tools (e.g., Power BI, Tableau).

The Ideal Qualifications
Technical skills:
• Domain knowledge of investment management operations, including security masters, securities trade and recon operations, reference data management, and pricing.
• Familiarity with regulatory requirements and compliance standards in the investment management industry.
• Experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle PACE, and Eagle DataMart.
• Familiarity with investment data platforms such as GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion.
Soft skills:
• Strong analytical and problem-solving abilities.
• Exceptional communication and interpersonal skills.
• Ability to influence and motivate teams without direct authority.
• Excellent time management and organizational skills, with the ability to prioritize multiple initiatives.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Job Summary
Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure; optimizes database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting data pipelines.
- Designs, develops, and operates large-scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. data lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, and others).
- Uses innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, in order to minimize manual and error-prone processes and improve productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies
- System Requirements Engineering: uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: builds partnerships and works collaboratively with others to meet shared objectives.
- Communicates effectively: develops and delivers multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: builds strong customer relationships and delivers customer-centric solutions.
- Decision quality: makes good and timely decisions that keep the organization moving forward.
- Data Extraction: performs data extract-transform-load (ETL) activities from a variety of sources and transforms the data for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Quality Assurance Metrics: applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
- Solution Documentation: documents information and solutions based on knowledge gained during product development; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not part of the initial learning.
- Solution Validation Testing: validates a configuration item change or solution using the function's defined best practices, including Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: solves problems, and may mentor others on effective problem solving, by using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: recognizes the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, is required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
- Familiarity analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a big data platform using open-source and third-party tools
- Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement for a cloud-based environment, and other data extraction tools and methods for a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- IoT technology
- Agile software development

Qualifications
- Strong programming skills in SQL, Python, and PySpark for data processing and automation.
- Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines.
- Understanding of machine learning and AI techniques, especially for data quality and anomaly detection.
- Experience with cloud platforms such as Azure and AWS, and familiarity with Azure Web Apps.
- Knowledge of data quality and data governance concepts (preferred).
- Nice to have: Power BI dashboard development experience.
Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417177
Relocation Package: No

Posted 2 weeks ago

Apply

5.0 - 9.0 years

16 - 22 Lacs

Bengaluru

Hybrid

We have an opening for the role of Big Data Developer with an MNC.

Mandatory skills:
- CRM Web UI Framework, including Component Workbench
- Hands-on experience in BOL-GENIL programming
- Knowledge of the 1-Order Framework, including APIs
- Involvement in an SAP CRM EHP upgrade
- ABAP Objects, Workflows, BAPIs, BADIs, report programming

Experience: 5-10 years
Location: Bangalore (Whitefield)
Notice period: 0-30 days
Work mode: Hybrid (3 days work from office)

Posted 2 weeks ago

Apply

2.0 - 5.0 years

3 - 3 Lacs

Pune

Remote

We are seeking a highly skilled Analyst (Big Data Developer) to join our dynamic team. The ideal candidate will have extensive experience with big data technologies and a strong background in developing and optimizing data integration frameworks and applications. You will be responsible for designing, implementing, and maintaining robust data solutions in a cloud environment.

Required Skills and Qualifications:
- Education: bachelor's degree in Engineering, Computer Science, or a related field, or an equivalent qualification.
- Experience: minimum of 2 to 5 years of experience in a recognized global IT services or consulting company, with hands-on expertise in big data technologies.
- Big data technologies: over 2 years of experience with the Hadoop ecosystem, Apache Spark, and associated tools; experience with modern big data technologies and frameworks such as Spark, Impala, and Kafka.
- Programming: proficiency in Java, Scala, and Python, with the ability to code in multiple languages.
- Cloud platforms: experience with cloud platforms, preferably GCP.
- Linux environment: at least 2 years of experience working in a Linux environment, including system tools, scripting languages, and integration frameworks.
- Schema design: extensive experience applying schema design principles and best practices to big data technologies.
- Hadoop distributions: knowledge of Hadoop distributions such as EMR, Cloudera, or Hortonworks.

Preferred Skills:
- Experience with additional big data tools and technologies.
- Certification in relevant big data or cloud technologies.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Delhi

On-site

Job Description
About the job: Help our clients, internal and external, understand and use RMS services better by understanding their requirements and queries, and helping address them through knowledge of data science and RMS.

Responsibilities
- Building knowledge of the Nielsen suite of products and demonstrating the same
- Understanding client concerns
- Able to put forth ways and means of solving client concerns with supervision
- Automation and development of solutions for existing processes
- Taking initiative to understand concerns/problems in the RMS product and participating in product improvement initiatives

Qualifications
- Professionals with degrees in Maths, Data Science, Statistics, or related fields involving statistical analysis of large data sets
- 2-3 years of experience in market research or a relevant field

Mindset and approach to work:
- Embraces change, innovation, and iterative processes in order to continuously improve the product's value to clients
- Continuously collaborates and offers support to improve the product
- Active interest in arriving at collaboration and consensus in communication plans, deliverables, and deadlines
- Plans and completes assignments independently within an established framework, breaking down complex tasks and making reasonable decisions; work is reviewed for overall technical soundness
- Participates in data experiments and PoCs, setting measurable goals, timelines, and reproducible outcomes
- Applies critical thinking and takes initiative
- Continuously reviews the latest industry innovations and effectively applies them to their work
- Consistently challenges and analyzes data to ensure accuracy

Functional skills:
- Ability to manipulate, analyze, and interpret large data sources
- Experienced in high-level programming languages (e.g. Python, R, SQL, Scala), as well as data visualization tools (e.g. Power BI, Spotfire, Tableau, MicroStrategy)
- Able to work in a virtual environment
- Familiar with Git/Bitbucket processes
- People with at least some experience in RMS or NIQ will have an advantage
- Can use a logical reasoning process to break down and work through increasingly challenging situations or problems to arrive at positive outcomes
- Identify and use data from various sources to influence decisions
- Interpret data effectively in relation to business objectives

Soft skills:
- Ability to engage and communicate with team and extended team members
- Can adapt to change and new ideas or ways of working
- Exhibits emotional intelligence when partnering with internal and external stakeholders

Additional Information
Our benefits: flexible working environment, volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP).

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth.
In 2023, NIQ combined with GfK, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru

On-site

Data Engineer - 1 (Experience: 0-2 years)

What we offer
Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

About our team
DEX is the central data org for Kotak Bank, which manages the entire data experience of Kotak Bank. DEX stands for Kotak's Data Exchange. This org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform, moving from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which provides great opportunities for technology fellows to build things from scratch and create one of the best-in-class data lakehouse solutions. The primary skills this team should encompass are software development (preferably Python) for platform building on AWS; data engineering with Spark (PySpark, Spark SQL, Scala) for ETL development; and advanced SQL and data modelling for analytics. The org size is expected to be around a 100+ member team, primarily based out of Bangalore, comprising ~10 sub-teams independently driving their charters.

As a member of this team, you get the opportunity to learn the fintech space, which is the most sought-after domain in the current world; be an early member in the digital transformation journey of Kotak; learn and leverage technology to build complex data platform solutions (including real-time, micro-batch, batch, and analytics solutions) in a programmatic way; and also be futuristic in building systems which can be operated by machines using AI technologies.

The data platform org is divided into 3 key verticals:

Data Platform
This vertical is responsible for building the data platform, which includes optimized storage for the entire bank and a centralized data lake; managed compute and orchestration frameworks, including concepts of serverless data solutions; managing the central data warehouse for extremely high-concurrency use cases; building connectors for different sources; building the customer feature repository; building cost-optimization solutions like EMR optimizers; performing automations; and building observability capabilities for Kotak's data platform. The team will also be the center for data engineering excellence, driving trainings and knowledge-sharing sessions with the large data consumer base within Kotak.

Data Engineering
This team will own data pipelines for thousands of datasets, be skilled at sourcing data from 100+ source systems, and enable data consumption for 30+ data analytics products. The team will learn and build data models in a config-based and programmatic way, and think big to build one of the most leveraged data models for financial orgs. This team will also enable centralized reporting for Kotak Bank which cuts across multiple products and dimensions. Additionally, the data built by this team will be consumed by 20K+ branch consumers, RMs, branch managers, and all analytics use cases.

Data Governance
The team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform.

If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple systems, then this is the team for you.

Your day-to-day role will include:
- Drive business decisions with technical input and lead the team.
- Design, implement, and support a data infrastructure from scratch.
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
- Extract, transform, and load data from various sources using SQL and AWS big data technologies (a brief sketch follows this listing).
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Build data platforms, data pipelines, or data management and governance tools.

Basic qualifications for Data Engineer / SDE in Data
- Bachelor's degree in Computer Science, Engineering, or a related field
- Experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

Preferred qualifications
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills
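A minimal sketch of the kind of batch lake-ingestion job the Data Platform and Data Engineering verticals describe, written in Scala Spark; the S3 paths, the event_id key, and the partitioning column are assumptions, not Kotak's actual design.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyLakeIngestion {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-lake-ingestion").getOrCreate()

    // Placeholder paths; a real job on EMR/Glue would receive these as arguments.
    val rawPath     = "s3://example-raw-zone/events/dt=2024-06-01/"
    val curatedPath = "s3://example-curated-zone/events/"

    val raw = spark.read.json(rawPath)

    // Typical lakehouse hygiene: deduplicate on a business key and stamp the load date.
    val curated = raw
      .dropDuplicates("event_id")
      .withColumn("load_date", to_date(col("event_ts")))

    // Partitioned Parquet keeps downstream warehouse and ad-hoc scans cheap.
    curated.write
      .mode("append")
      .partitionBy("load_date")
      .parquet(curatedPath)

    spark.stop()
  }
}
```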

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 9 Lacs

Bengaluru

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Scala Developer - Senior
EY's GDS Tax Technology team's mission is to develop, implement and integrate technology solutions that better serve our clients and engagement teams. As a member of EY's core Tax practice, you'll develop deep tax technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require tax departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time consuming for a company. EY's GDS Tax Technology team members work side-by-side with the firm's partners, clients and tax technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market-leading tools supporting Tax. GDS Tax Technology works closely with clients and professionals in the following areas: Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. GDS Tax Technology provides solution architecture, application development, testing and maintenance support to the global Tax service line, both on a proactive basis and in response to specific requests.

The opportunity
We're looking for a Tax Senior with senior-level Scala development expertise to join the TTT team in the Tax SL. This is a fantastic opportunity to be part of a pioneering firm whilst being instrumental in the growth of a new service offering.

Your key responsibilities
Work experience: 3 to 5 years of hands-on development experience specifically in Scala (Spark).
Development experience with RDDs, writing code for performing actions and transformations using in-memory processing in Scala.
Development experience with DataFrames and Datasets, and preparing notebooks in Scala for running jobs in Spark.
Experience with optimizing existing code for better performance and efficiency.
Exposure on the database side (understanding of read/write queries, handling data volume) and a basic understanding of NoSQL databases like Cassandra and Astra.
Understanding of distributed computing and related technologies (Databricks).
Hands-on experience with development tools like IntelliJ.
Knowledge of working with high-data-volume projects (reading and writing up to a million records per transaction).
Basic debugging skills and Information Security knowledge.
Able to perform developer testing for the components written/modified by self.
Able to perform performance and load testing from a development perspective.
Able to prepare development documents such as design notes, development test cases, WBS (work breakdown structure) and effort estimation.
Knowledge of and exposure to GitHub, Azure DevOps, code maintenance and CI/CD release processes.
Exposure to the software development life cycle and agile methodologies.
Degree in software engineering, computer science or similar.
Good communication skills (verbal and written).

Responsibilities
Develop and maintain software applications using Scala.
Write clean, efficient, and reusable code following Scala best practices.
Work on integrating third-party libraries and APIs with Scala.
Work closely with the dev team, provide guidance to improve their skill set, and help them complete deliverables within the planned time frame.
Review code written by team members and suggest changes in terms of coding standards, best practices, performance optimization and security considerations.
Implement test-driven development and automated testing for Scala applications.
Provide multiple alternatives for resolving a problem, explaining the pros and cons of each approach.
Maintain an overall understanding of the code so as to connect to topics discussed during design, development or issue-debugging sessions.
Work on code optimization activities periodically and ensure the quality of work delivered.
Participate meaningfully in design and architecture sessions and requirements-understanding meetings by asking questions, confirming understanding, and summarizing the discussion.
Send daily/weekly status reports and summaries of the work completed for the day/week.
Participate in scrum calls and other technical discussion calls with the team.
Train and provide guidance for team members on technologies and development concepts.
Work with the team on providing estimates and creating the WBS (work breakdown structure) for the development tasks assigned.
Stay updated with the latest Scala developments and frameworks.
Be self-organizing and plan the day based on the priorities communicated.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success, as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
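To illustrate the RDD and DataFrame/Dataset experience this role asks for, here is a small, self-contained Scala sketch showing the same aggregation written once against an RDD and once against a DataFrame; the values and column names are invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Illustrative only: the same aggregation expressed over an RDD and over a DataFrame.
object RddVsDataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rdd-vs-df").master("local[*]").getOrCreate()
    import spark.implicits._

    // RDD style: transformations (filter) are lazy; the action (reduce) triggers execution.
    val amounts = spark.sparkContext.parallelize(Seq(120.0, -5.0, 430.5, 88.0))
    val totalPositive = amounts.filter(_ > 0).reduce(_ + _)
    println(s"RDD total: $totalPositive")

    // DataFrame style: the Catalyst optimizer plans the equivalent query.
    val df = Seq(120.0, -5.0, 430.5, 88.0).toDF("amount")
    df.filter($"amount" > 0)
      .agg(sum($"amount").as("total"))
      .show()

    spark.stop()
  }
}
```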

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Responsibilities

ETL Development
The CDP ETL & Database Engineer will be responsible for building pipelines to feed downstream data processes. They will be able to analyze data, interpret business requirements, and establish relationships between data sets. The ideal candidate will be familiar with different encoding formats and file layouts such as JSON and XML.

Implementations & Onboarding
Will work with the team to onboard new clients onto the ZMP/CDP+ platform. The candidate will solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. The hands-on engineer will take a test-driven approach towards development and will be able to document processes and workflows.

Incremental Change Requests
The CDP ETL & Database Engineer will be responsible for analyzing change requests and determining the best approach towards the implementation and execution of the request. This requires the engineer to have a deep understanding of the platform's overall architecture. Change requests will be implemented and tested in a development environment to ensure their introduction will not negatively impact downstream processes.

Change Data Management
The candidate will adhere to change data management procedures and actively participate in CAB meetings where change requests are presented and approved. Prior to introducing a change, the engineer will ensure that processes are running in a development environment. The engineer will be asked to do peer-to-peer code reviews and solution reviews before production code deployment.

Collaboration & Process Improvement
The engineer will be asked to participate in knowledge-share sessions where they will engage with peers to discuss solutions, best practices, overall approach, and process. The candidate will look for opportunities to streamline processes with an eye towards building a repeatable model to reduce implementation duration.

Job Requirements
The CDP ETL & Database Engineer will be well versed in the following areas:
Relational data modeling
ETL and FTP concepts
Advanced analytics using SQL functions
Cloud technologies - AWS, Snowflake
Able to decipher requirements, provide recommendations, and implement solutions within predefined timeframes.
Able to work independently, while also contributing in a team setting.
Able to confidently communicate status, raise exceptions, and voice concerns to their direct manager.
Participate in internal client project status meetings with the Solution/Delivery management teams.
When required, collaborate with the Business Solutions Analyst (BSA) to solidify requirements.
Ability to work in a fast-paced, agile environment, with a sense of urgency when escalated issues arise.
Strong communication and interpersonal skills; ability to multitask and prioritize workload based on client demand.
Familiarity with Jira for workflow management and time allocation.
Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives, etc.

Required Skills

ETL
ETL tools such as Talend (Preferred, not required)
DMExpress - Nice to have
Informatica - Nice to have

Database
Hands-on experience with the following database technologies:
Snowflake (Required)
MySQL/PostgreSQL - Nice to have
Familiar with NoSQL DB methodologies (Nice to have)

Programming Languages
Can demonstrate knowledge of any of the following:
PL/SQL
JavaScript - Strong plus
Python - Strong plus
Scala - Nice to have

AWS
Knowledge of the following AWS services:
S3
EMR (concepts)
EC2 (concepts)
Systems Manager / Parameter Store
Understands JSON data structures and key-value pairs.
Working knowledge of code repositories such as Git, WinCvs, SVN.
Workflow management tools such as Apache Airflow, Kafka, Automic/Appworx
Jira

Minimum Qualifications
Bachelor's degree or equivalent
2-4 years' experience
Excellent verbal & written communication skills
Self-starter, highly motivated
Analytical mindset
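As a rough illustration of "advanced analytics using SQL functions", the sketch below uses Spark SQL in Scala (as a stand-in for Snowflake SQL) to pick the latest record per customer with a window function; the table, columns and data are made up for the example.

```scala
import org.apache.spark.sql.SparkSession

// Window-function sketch: keep only the most recent event per customer.
object LatestRecordPerCustomer {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sql-window-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    Seq(
      ("c1", "2024-01-10", 100.0),
      ("c1", "2024-02-02", 250.0),
      ("c2", "2024-01-15", 75.0)
    ).toDF("customer_id", "event_date", "amount")
      .createOrReplaceTempView("events")

    // ROW_NUMBER() ranks each customer's rows by recency; rn = 1 keeps only the newest.
    spark.sql(
      """
        |SELECT customer_id, event_date, amount
        |FROM (
        |  SELECT *,
        |         ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY event_date DESC) AS rn
        |  FROM events
        |) ranked
        |WHERE rn = 1
        |""".stripMargin
    ).show()

    spark.stop()
  }
}
```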

Posted 2 weeks ago

Apply

4.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Azure Data Engineer + Power BI Senior – Consulting
As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will be working on the latest Microsoft BI technologies and will collaborate with other teams within Consulting services.

The opportunity
We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of our service offering.

Your key responsibilities
Responsible for managing multiple client engagements.
Understand and analyse business requirements by working with various stakeholders and create the appropriate information architecture, taxonomy and solution approach.
Work independently to gather requirements and perform cleansing, extraction and loading of data.
Translate business and analyst requirements into technical code.
Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations.
Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, Staging, and Data Warehousing.
Design and develop solutions in Databricks, Scala, Spark and SQL to process and analyze large datasets, perform data transformations, and build data models.
Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills and attributes for success
Collaborate with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments and other documents/templates.
Able to manage senior stakeholders.
Experience in leading teams to execute high-quality deliverables within stipulated timelines.
Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modelling, DAX, Power Query, Microsoft Fabric.
Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations.
Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing.
Good understanding of various Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, Azure SQL Database/Synapse Analytics.
Strong SQL skills and experience with one of the following: Oracle, SQL, Azure SQL.
Good to have: experience in SSAS or Azure SSAS and Agile project management.
Basic knowledge of Azure Machine Learning services.
Excellent written and communication skills and the ability to deliver technical demonstrations.
Quick learner with a "can do" attitude.
Demonstrate and apply strong project management skills, inspiring teamwork and responsibility with engagement team members.

To qualify for the role, you must have
A bachelor's or master's degree.
A minimum of 4-7 years of experience, preferably with a background in a professional services firm.
Excellent communication skills; consulting experience preferred.

Ideally, you'll also have
Analytical ability to manage multiple projects and prioritize tasks into manageable work products.
Can operate independently or with minimum supervision.

What working at EY offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
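A hedged sketch of the Databricks/Scala transformation work described above: read files from an ADLS Gen2 container, aggregate, and save a Delta table that a Power BI report could consume. The storage account, container, columns and table names are placeholders, and the job assumes cluster credentials are already configured.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Databricks-style transformation sketch: ADLS Gen2 CSVs -> daily aggregates -> Delta table.
object SalesDailyAggregates {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sales-daily-aggregates").getOrCreate()

    val sales = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("abfss://curated@examplestorage.dfs.core.windows.net/sales/")

    val daily = sales
      .withColumn("sale_date", to_date(col("sale_ts")))
      .groupBy("sale_date", "region")
      .agg(sum("amount").as("total_amount"), countDistinct("order_id").as("orders"))

    daily.write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("analytics.sales_daily")   // surfaced to Power BI via a SQL endpoint

    spark.stop()
  }
}
```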

Posted 2 weeks ago

Apply

3.0 years

8 - 10 Lacs

Chennai

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for the Marketing and Finance business units. It supports the integration of data from various data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. The position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and applications software, and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.

Responsibilities
Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development.
Develops and delivers data engineering documentation.
Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
Recommends analytic reporting products/tools and supports the adoption of emerging technology.
Performs data engineering maintenance and support.
Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
Performs ETL tool capabilities with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform.

Required Qualifications
Codes using programming languages used for statistical analysis and modeling, such as Python/Java/Scala/C#.
Strong understanding of database systems and data warehousing solutions.
Strong understanding of the data interconnections between organizations' operational and business functions.
Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing data securely, and providing data accessibility.
Strong understanding of the data environment to ensure that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
Strong knowledge of data structures, as well as data filtering and data optimization.
Strong understanding of analytic reporting technologies and environments (e.g., PBI, Looker, Qlik, etc.).
Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure preferred.
Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to the stakeholders.
Bachelor's degree in MIS, mathematics, statistics, or computer science, international equivalent, or equivalent job experience.

Required Skills
3 years of experience with Databricks
Other required experience includes: SSIS/SSAS, Apache Spark, Python, R and SQL, SQL Server

Preferred Skills
Scala, Delta Lake, Unity Catalog, Azure Logic Apps, Cloud Services Platform (e.g., GCP, Azure, or AWS)

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
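As an illustration of the real-time side of this role, the following Scala sketch uses Spark Structured Streaming to read JSON events from a Kafka topic and maintain windowed counts; the broker address, topic name and event schema are invented for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Streaming sketch: Kafka JSON events -> per-minute counts by status, printed to the console.
object ShipmentEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shipment-event-stream").getOrCreate()
    import spark.implicits._

    val schema = new StructType()
      .add("shipment_id", StringType)
      .add("status", StringType)
      .add("event_ts", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "shipment-events")
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Count events per status in one-minute windows, tolerating 5 minutes of late data.
    val counts = events
      .withWatermark("event_ts", "5 minutes")
      .groupBy(window($"event_ts", "1 minute"), $"status")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```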

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for the Marketing and Finance business units. It supports the integration of data from various data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. The position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and applications software, and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.

Responsibilities
Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development.
Develops and delivers data engineering documentation.
Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.
Recommends analytic reporting products/tools and supports the adoption of emerging technology.
Performs data engineering maintenance and support.
Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.
Performs ETL tool capabilities with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform.

Required Qualifications
Codes using programming languages used for statistical analysis and modeling, such as Python/Java/Scala/C#.
Strong understanding of database systems and data warehousing solutions.
Strong understanding of the data interconnections between organizations' operational and business functions.
Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing data securely, and providing data accessibility.
Strong understanding of the data environment to ensure that it can scale for the following demands: throughput of data, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, data regulations, and compliance.
Strong knowledge of data structures, as well as data filtering and data optimization.
Strong understanding of analytic reporting technologies and environments (e.g., PBI, Looker, Qlik, etc.).
Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure preferred.
Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to the stakeholders.
Bachelor's degree in MIS, mathematics, statistics, or computer science, international equivalent, or equivalent job experience.

Required Skills
3 years of experience with Databricks
Other required experience includes: SSIS/SSAS, Apache Spark, Python, R and SQL, SQL Server

Preferred Skills
Scala, Delta Lake, Unity Catalog, Azure Logic Apps, Cloud Services Platform (e.g., GCP, Azure, or AWS)

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 2 weeks ago

Apply

0 years

6 - 7 Lacs

Indore

Remote

Cloud Platform: Amazon Web Services (AWS) – the backbone providing robust, scalable, and secure infrastructure.

Ingestion Layer (Data Ingestion Frameworks):
Apache NiFi: for efficient, real-time data routing, transformation, and mediation from diverse sources.
Data Virtuality: facilitates complex ETL and data virtualization, creating a unified view of disparate data.

Data Frameworks (Data Processing & Microservices):
Rules Engine & Eductor (in-house tools - Scala, Python): our proprietary microservices for specialized data handling and business-logic automation.
Kafka: our high-throughput, fault-tolerant backbone for real-time data streaming and event processing.

Analytics Layer (Analytics Services & Compute):
Altair: for powerful data visualization and interactive analytics.
Apache Zeppelin: our interactive notebook for collaborative data exploration and analysis.
Apache Spark: our unified analytics engine for large-scale data processing and machine learning workloads.

Data Presentation Layer (Client-facing & APIs):
Client Services (React, TypeScript): for dynamic, responsive, and type-safe user interfaces.
Client APIs (Node.js, Nest.js): for high-performance, scalable backend services.

Access Layer:
API Gateway (Amazon API Gateway): manages all external API access, ensuring security, throttling, and routing.
AWS VPN (Clients, Site-to-Site, OpenVPN): secure network connectivity.
Endpoints & Service Access (S3, Lambda): controlled access to core AWS services.
DaaS (Data-as-a-Service - Dremio, Data Virtuality, Power BI): empowering self-service data access and insights.

Security Layer:
Firewall (AWS WAF): protects web applications from common exploits.
IdM, IAM (Keycloak, AWS Cognito): robust identity and access management.
Security Groups & Policy (AWS): network-level security and granular access control.
ACLs (Access Control Lists - AWS): fine-grained control over network traffic.
VPCs (Virtual Private Clouds - AWS): isolated and secure network environments.

Data Layer (Databases & Storage):
OpenSearch Service: for powerful search, analytics, and operational data visualization.
Data Warehouse - AWS Redshift: our primary analytical data store.
Databases (PostgreSQL, MySQL, OpenSearch): robust relational and search-optimized databases.
Storage (S3 object storage, EBS, EFS): highly scalable, durable, and cost-effective storage solutions.

Compute & Orchestration:
EKS (Amazon Elastic Kubernetes Service): manages our containerized applications, providing high availability and scalability for microservices.

Job Types: Full-time, Contractual / Temporary
Contract length: 6 months
Pay: ₹50,000.00 - ₹60,000.00 per month
Schedule: Monday to Friday, weekend availability
Work Location: Remote
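As a hypothetical example of the kind of Scala microservice sitting next to Kafka in this stack, here is a minimal producer that publishes a JSON event to a topic; the broker address, topic name and payload are invented and not part of the actual platform.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Minimal Kafka producer sketch: publishes one JSON event that downstream
// consumers (e.g. Spark jobs or a rules engine) could react to.
object EventPublisher {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      val record = new ProducerRecord[String, String](
        "ingest-events", "record-42", """{"source":"nifi","status":"ingested"}""")
      producer.send(record).get()   // block until the broker acknowledges the write
    } finally {
      producer.close()
    }
  }
}
```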

Posted 2 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies