Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities. At least two years (10+ overall) of experience building and leading highly complex technical engineering teams. Lead the software engineering team, from sourcing to closing. Drive the strategic vision for the team and product.

Responsibilities:
- Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
- Experience managing a data-focused product, ML platform, and/or UI/UX
- Hands-on experience in HTML, CSS, Java, Spring Boot, Oracle, NoSQL; OR design, develop, and optimize scalable distributed data processing pipelines using Apache Spark and Scala
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Experience managing, hiring, and coaching software engineering teams
- Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
- Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
- Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

Qualifications:
- 6-10 years of relevant experience in an Apps Development or systems analysis role
- Extensive experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Subject Matter Expert (SME) in at least one area of Applications Development
- Ability to adjust priorities quickly as circumstances dictate
- Demonstrated leadership and project management skills
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor’s degree/University degree or equivalent experience
- Master’s degree preferred

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
- Monitor and control all phases of the development process: analysis, design, construction, testing, and implementation, as well as provide user and operational support on applications to business users
- Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business processes, system processes, and industry standards, and make evaluative judgements
- Create high-level technical/process documentation and presentations for audiences at various levels
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
- Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and aid customer exposure systems
- Ensure essential procedures are followed and help define operating standards and processes
- Well versed in the Agile development life cycle; acts as SME to senior stakeholders and/or other team members
- Keep abreast of the latest technological developments in the work area and bring relevant ideas/concepts to the table
- Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

Qualifications:
- 7-11 years of relevant experience
- Experience in systems analysis and programming of software applications written in C#, ASP.NET, .NET, Microservices, Angular, Oracle/SQL Server, and messaging platforms like Kafka
- Strong OOP fundamentals, programming languages, and web frameworks; SOLID architecture, design patterns, microservices and their ecosystem
- Working knowledge of containerization using Docker/Kubernetes, OpenShift, cloud computing and deployment strategies using virtual environments, source code control systems, unit test frameworks, and build and deployment tools
- Hands-on working experience and the ability to motivate and lead a team
- Experience in managing and implementing successful projects and working knowledge of consulting/project management techniques/methods
- Must be able to work independently as well as in a team environment and adapt to a rapidly changing environment

Skills:
- Expertise in C#, ASP.NET, .NET, Microservices, Angular, Oracle and SQL Server
- Solid understanding of SOA concepts, microservices & RESTful API design
- Ability to prioritize and manage schedules under tight, fixed deadlines
- Ability to produce professional, technically sound, and visually appealing presentations and architecture designs
- Knowledge of document management solutions will be a plus
- Strong writing, communication, time-management, decision-making, and basic task organization skills
- Experience with CI/CD build pipelines and toolchain: Git, Bitbucket, TeamCity, Artifactory, Jira
- Should be well versed in solutions using the latest technology advancements
- Good knowledge of standard SDLC and Agile processes
- Knowledge of AI tools like Copilot, basics of Unix/Linux, and other programming languages like Python, Kotlin, Scala, shell scripting, etc. is good to have
- Knowledge of web server setup and configuration with reverse proxy/SSL setup (preferably the nginx web server) is a plus

Education:
- Bachelor’s degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
9.0 - 12.0 years
0 - 3 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
Required Skills & Experience:
- Immediate joiners with 7+ years of experience in data engineering and a strong background in Big Data technologies
- 6+ years of hands-on experience with Apache Spark and Scala
- Experience in distributed computing and parallel processing
- Solid understanding of the Hadoop ecosystem: Hive, HDFS, etc.
- Proficient in writing efficient and reusable code in Scala
- Strong knowledge of SQL and data modeling
- Experience with cloud platforms like AWS/GCP/Azure is a plus
- Familiarity with data orchestration tools (e.g., Airflow, Oozie) is desirable
- Strong problem-solving and analytical skills
- Excellent communication and team-leadership abilities
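Orchestration tools such as Airflow model a pipeline as a DAG of dependent tasks and only run a task once everything it depends on has finished. A minimal sketch of that scheduling idea in plain Python, using the standard-library `graphlib` (the task names are hypothetical, not from any real pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: one extract feeds two transforms, which join before load.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "enrich": {"extract"},
    "load": {"clean", "enrich"},
}

# static_order() yields each task only after all of its dependencies,
# which is the scheduling contract an orchestrator enforces.
order = list(TopologicalSorter(dag).static_order())
print(order)  # "extract" comes first, "load" comes last
```

Real orchestrators add retries, schedules, and backfills on top of this core, but the dependency-ordering contract is the same.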
Posted 1 week ago
5.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_1634_JOB
- Date Opened: 12/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: AWS-BIGDATA-DEVELOPER
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4

Roles and Responsibilities:
- Experience in AWS Glue
- Experience with one or more of the following: Spark, Scala, Python, and/or R
- Experience in API development with NodeJS
- Experience with AWS (S3, EC2) or another cloud provider
- Experience in data virtualization tools like Dremio and Athena is a plus
- Should be technically proficient in Big Data concepts
- Should be technically proficient in Hadoop and NoSQL (MongoDB)
- Good communication and documentation skills
Posted 1 week ago
8.0 - 12.0 years
4 - 8 Lacs
Pune
Work from Office
Job Information
- Job Opening ID: ZR_1581_JOB
- Date Opened: 25/11/2022
- Industry: Technology
- Job Type:
- Work Experience: 8-12 years
- Job Title: Senior Specialist - Data Engineer
- City: Pune
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4
- Location: Pune/Mumbai/Bangalore/Chennai

Roles & Responsibilities:
- 8-10 years of total working experience with big data tools like Spark, Kafka, Hadoop, etc.
- Design and deliver consumer-centric, high-performance systems; you will be dealing with huge volumes of data sets arriving through batch and streaming platforms
- Build and deliver data pipelines that process, transform, integrate and enrich data to meet various demands from the business
- Mentor the team on infrastructure, networking, data migration, monitoring and troubleshooting aspects
- Focus on automation using Infrastructure as Code (IaC), Jenkins, DevOps, etc.
- Design, build, test and deploy streaming pipelines for data processing in real time and at scale
- Experience with stream-processing systems like Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/functional scripting languages: Scala, Java, etc.
- Develop software systems using test-driven development, employing CI/CD practices
- Partner with other engineers and team members to develop software that meets business needs
- Follow Agile methodology for software development and technical documentation
- Good to have: banking/finance domain knowledge
- Strong written and oral communication, presentation and interpersonal skills
- Exceptional analytical, conceptual, and problem-solving abilities
- Able to prioritize and execute tasks in a high-pressure environment
- Experience working in a team-oriented, collaborative environment
- 8-10 years of hands-on coding experience
- Proficient in Java, with good knowledge of its ecosystem
- Experience writing Spark code in Scala
- Experience with big data tools like Sqoop, Hive, Pig, Hue
- Solid understanding of object-oriented programming and HDFS concepts
- Familiar with various design and architectural patterns
- Experience with big data tools: Hadoop, Spark, Kafka, Flink, Hive, Sqoop, etc.
- Experience with relational SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB and Cassandra
- Experience with data pipeline tools like Airflow, etc.
- Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, BigQuery
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.
- Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation and tooling frameworks
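Stream processors such as Storm, Spark Streaming and Flink all build on the same core idea: group unbounded events into bounded windows and aggregate each window. A toy tumbling-window aggregation in plain Python (the events and the 10-second window size are invented for the example):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=10):
    """events: iterable of (timestamp_secs, key) pairs.
    Returns {window_start: {key: count}} for non-overlapping (tumbling) windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one window, aligned to window_secs.
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

events = [(1, "click"), (4, "view"), (12, "click"), (15, "click")]
print(tumbling_window_counts(events))
# window 0 holds the first two events, window 10 the last two
```

Real engines add event-time vs. processing-time semantics, watermarks and state checkpointing; this sketch only shows the windowed-aggregation core.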
Posted 1 week ago
6.0 - 10.0 years
3 - 8 Lacs
Pune
Work from Office
Job Information
- Job Opening ID: ZR_1671_JOB
- Date Opened: 20/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 6-10 years
- Job Title: Oracle Warehouse Builder/Developer
- City: Pune
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4

Roles & Responsibilities:
- Oracle Warehouse Builder (OWB), Oracle Workflow Builder, Oracle TBSS
- Oracle Warehouse Builder 9i (Client Version 9.0.2.62.3 / Repository Version 9.0.2.0.0)
- Oracle Warehouse Builder 4
- Oracle Workflow Builder 2.6.2
- Oracle Database 10g (TNS for IBM/AIX RISC System/6000, Version 10.2.0.5.0 - Production)
- More than 5 years' experience with Oracle Warehouse Builder (OWB) and Oracle Workflow Builder
- Expert knowledge of Oracle PL/SQL, from developing individual code objects to entire Data Marts
- Scheduling tools: Oracle TBSS (creating and running DBMS_SCHEDULER jobs) and trigger-based scheduling for file sources based on control files
- Must have design and development experience in data pipeline solutions from different source systems (files, Oracle) to data lakes
- Must have been involved in creating/designing Hive tables and loading and analyzing data using Hive queries
- Must have knowledge of CA Workload Automation DE 12.2 to create and schedule jobs
- Extensive knowledge of the entire life cycle of Change/Incident/Problem management using ServiceNow
- Oracle Enterprise Manager 10gR1 (monitoring jobs and tablespace utilization)
- Extensive knowledge of fetching mainframe COBOL files (ASCII and EBCDIC formats) to the landing area, and processing (formatting) and loading (with error handling) these files to Oracle tables using SQL*Loader and external tables
- Extensive knowledge of Oracle Forms 6 to integrate with OWB 4
- Work closely with the business owner teams and functional/data analysts in the entire development/BAU process
- Work closely with the AIX support and DBA support teams for access privileges, storage issues, etc.
- Work closely with the Batch Operations team and MFT teams for file transfer issues

Migration of Oracle to the Hadoop ecosystem:
- Must have working experience with Hadoop ecosystem elements like HDFS, MapReduce, YARN, etc.
- Must have working knowledge of Scala & Spark DataFrames to convert the existing code to Hadoop data lakes
- Must have knowledge of creating Hive partitions, dynamic partitions and buckets
- Use Denodo for data virtualization to provide the required data access for end users
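Hive bucketing, mentioned above, routes each row to one of a fixed number of bucket files by hashing the bucketing column, so the same key always lands in the same bucket. A pure-Python sketch of that routing rule (the hash here is a simple illustrative one, not Hive's actual hash function, and the customer keys are invented):

```python
def hash_key(key: str) -> int:
    # Simple deterministic string hash for illustration (not Hive's implementation).
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF
    return h

def bucket_for(key: str, num_buckets: int) -> int:
    # Hive-style routing: hash the bucket column, take it modulo the bucket count.
    return hash_key(key) % num_buckets

rows = ["cust_001", "cust_002", "cust_003", "cust_004"]
buckets = {r: bucket_for(r, 4) for r in rows}
# Every row lands in a bucket in [0, 4), and routing is deterministic,
# which is what makes bucketed joins and sampling possible.
print(buckets)
```

Partitions, by contrast, split data by the *value* of a column (one directory per value); buckets split by *hash*, giving a bounded file count regardless of key cardinality.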
Posted 1 week ago
12.0 - 15.0 years
13 - 17 Lacs
Mumbai
Work from Office
Job Information
- Job Opening ID: ZR_1688_JOB
- Date Opened: 24/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 12-15 years
- Job Title: Big Data Architect
- City: Mumbai
- Province: Maharashtra
- Country: India
- Postal Code: 400008
- Number of Positions: 4
- Location: Mumbai, Pune, Chennai, Hyderabad, Coimbatore, Kolkata

12+ years' experience in the Big Data space across architecture, design, development, testing & deployment, with a full understanding of the SDLC.
1. Experience with Hadoop and its related technology stack
2. Experience with the Hadoop ecosystem (HDP/CDP) / Big Data (especially Hive); hands-on experience with programming languages such as Java/Scala/Python; hands-on experience/knowledge of Spark
3. Responsible for and focused on uptime and reliable running of all ingestion/ETL jobs
4. Good SQL skills and experience working in a Unix/Linux environment is a must
5. Create and maintain optimal data pipeline architecture
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
7. Good to have: cloud experience
8. Good to have: experience integrating Hadoop with data visualization tools like Power BI
Posted 1 week ago
2.0 - 4.0 years
5 - 8 Lacs
Chennai
Work from Office
Job Information
- Job Opening ID: ZR_2458_JOB
- Date Opened: 05/04/2025
- Industry: IT Services
- Job Type:
- Work Experience: 2-4 years
- Job Title: Java Full-Stack Developer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600094
- Number of Positions: 1
- Contract Duration: 1 year (extendable)

Responsibilities:
- Design and develop new features using full-stack development (Java/Spring/React/Angular/MySQL) for a cloud (AWS/other) and mobile product application in an SOA/microservices architecture
- Design great features and continuously improve them by exploring alternative technologies to make design improvements
- Performance testing with Gatling (Scala)
- Work with CI/CD pipelines and tools (Docker, Ansible) to improve the build and deployment process
- Work with QA to ensure the quality and timing of new release deployments

Skills/Experience:
- Good coding/problem-solving skills and interest in learning new things will be key; time/training will be provided to learn new technologies/tools
- 2 or more years of professional experience in building web/mobile applications using Java or similar technologies (C#, Ruby, Python, Elixir, NodeJS)
- Experience in the Spring Framework or similar frameworks
- Experience in any DB (SQL/NoSQL)
- Any experience in front-end development using React/Vue/Angular/similar frameworks
- Any experience with Java or similar testing frameworks (JUnit/mocks, etc.)
Posted 1 week ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_2162_JOB
- Date Opened: 15/03/2024
- Industry: Technology
- Job Type:
- Work Experience: 7-9 years
- Job Title: Sr Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560004
- Number of Positions: 5

Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, Kafka
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills in building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance and data architecture
- Experience working with data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and the Scrum development process
- Experience in EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products in streaming, serverless and microservices architectures and platforms
Posted 1 week ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information
- Job Opening ID: ZR_2199_JOB
- Date Opened: 15/04/2024
- Industry: Technology
- Job Type:
- Work Experience: 6-10 years
- Job Title: Sr Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600004
- Number of Positions: 4

- Strong experience in Python
- Good experience in Databricks
- Experience working on the AWS/Azure cloud platforms
- Experience working with REST APIs and services, messaging and event technologies
- Experience with ETL or building data pipeline tools
- Experience with streaming platforms such as Kafka
- Demonstrated experience working with large and complex data sets
- Ability to document data pipeline architecture and design
- Experience in Airflow is nice to have
- Ability to build complex Delta Lake pipelines
Posted 1 week ago
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_2470_JOB
- Date Opened: 03/05/2025
- Industry: IT Services
- Job Type:
- Work Experience: 6-10 years
- Job Title: Sr. Data Engineer
- City: Bangalore South
- Province: Karnataka
- Country: India
- Postal Code: 560050
- Number of Positions: 1

We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.

Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows
- Collaborate with teams to gather requirements and build scalable solutions
- Ensure data governance, security, and optimal performance of systems
- Mentor junior engineers and drive end-to-end project delivery

Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms
- Expertise in big data tools (e.g., Apache Spark, Kafka)
- Excellent communication skills and leadership abilities

Preferred:
- Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices
Posted 1 week ago
5.0 - 8.0 years
2 - 5 Lacs
Pune, Haveli
Work from Office
Job Information
- Job Opening ID: ZR_2415_JOB
- Date Opened: 04/02/2025
- Industry: IT Services
- Job Type:
- Work Experience: 5-8 years
- Job Title: Scala Developer
- City: Haveli
- Province: Pune
- Country: India
- Postal Code: 411057
- Number of Positions: 1

As an experienced member of our Core Banking Base Development / Professional Services Group, you will be responsible for effective microservice development in Scala and delivery of our NextGen transformation / professional services projects/programs.

What You Will Do:
- Adhere to the processes followed for development in the program
- Report status, and proactively identify issues to the Tech Lead and management team
- Take personal ownership of, and accountability for, delivering assigned tasks and deliverables within the established schedule
- Facilitate a strong and supportive team environment that enables the team, as well as individual team members, to overcome any political, bureaucratic and/or resource barriers to participation
- Recommend and implement solutions; be fully hands-on and able to work independently
What You Will Need to Have:
- 4 to 8 years of recent hands-on experience in Scala and the Akka Framework

Technical skillset required:
- Hands-on experience in Scala development, including the Akka Framework
- Good understanding of Akka Streams
- Test-driven development
- Awareness of message brokers
- Hands-on experience in the design and development of microservices
- Good awareness of event-driven microservices architecture
- gRPC protocol + Protocol Buffers
- Hands-on experience with Docker containers
- Hands-on experience with Kubernetes
- Awareness of cloud-native applications
- Jira, Confluence, Ansible, Terraform
- Good knowledge of cloud platforms (preferably AWS) and their IaaS, PaaS and SaaS solutions
- Good knowledge of and hands-on experience with scripting languages like Batch and Bash; hands-on experience with Python would be a plus
- Knowledge of integration and unit testing and Behavior Driven Development
- Good problem-solving skills
- Good communication skills

What Would Be Great to Have:
- Experience integrating with third-party applications
- Agile knowledge
- Good understanding of configuration management
- Financial industry and core banking integration experience
Posted 1 week ago
5.0 - 8.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Information
- Job Opening ID: ZR_2168_JOB
- Date Opened: 10/04/2024
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: AWS Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600002
- Number of Positions: 4

Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake

Responsibilities:
- Create and manage cloud resources in AWS
- Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data based on various proprietary systems
- Implement data ingestion and processing with the help of Big Data technologies
- Data processing/transformation using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data
- Define process improvement opportunities to optimize data collection, insights and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
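An automated data quality check like the one described above can be as simple as a set of row-level validation rules that gate what enters the platform. A minimal pure-Python sketch (the rule names and field names are hypothetical, invented for the example):

```python
# Each rule is a named predicate; a row is rejected if any rule fails.
# Field names ("id", "amount", "currency") are hypothetical examples.
RULES = {
    "id_present": lambda row: bool(row.get("id")),
    "amount_non_negative": lambda row: row.get("amount", 0) >= 0,
    "currency_known": lambda row: row.get("currency") in {"USD", "EUR", "INR"},
}

def check_row(row: dict) -> list:
    """Return the names of all rules the row violates (empty list = clean)."""
    return [name for name, rule in RULES.items() if not rule(row)]

good = {"id": "t1", "amount": 100.0, "currency": "INR"}
bad = {"id": "", "amount": -5, "currency": "XYZ"}
print(check_row(good))  # []
print(check_row(bad))   # all three rule names
```

In a real pipeline the same pattern runs inside the ingestion job, with failed rows routed to a quarantine table and rule failures surfaced as metrics.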
Posted 1 week ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information
- Job Opening ID: ZR_1628_JOB
- Date Opened: 09/12/2022
- Industry: Technology
- Job Type:
- Work Experience: 5-8 years
- Job Title: Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4

Roles and Responsibilities:
- 4+ years of experience as a data developer using Python
- Knowledge of Spark and PySpark preferable but not mandatory
- Azure cloud experience preferred; alternate cloud experience is fine
- Preferred experience in the Azure platform, including Azure Data Lake, Databricks, and Data Factory
- Working knowledge of different file formats such as JSON, Parquet, CSV, etc.
- Familiarity with data encryption and data masking
- Database experience in SQL Server is preferable; preferred experience in NoSQL databases like MongoDB
- Team player; reliable, self-motivated, and self-disciplined
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role

BlackRock Overview: BlackRock is one of the world’s preeminent asset management firms and a premier provider of global investment management, risk management and advisory services to institutional, intermediary and individual investors around the world. BlackRock offers a range of solutions — from rigorous fundamental and quantitative active management approaches aimed at maximizing outperformance to highly efficient indexing strategies designed to gain broad exposure to the world’s capital markets. Our clients can access our investment solutions through a variety of product structures, including individual and institutional separate accounts, mutual funds and other pooled investment vehicles, and the industry-leading iShares® ETFs.

Aladdin Financial Engineering Group (AFE): AFE is a diverse and global team with a keen interest and expertise in all things related to technology and financial analytics. The group is responsible for the research and development of quantitative financial and behavioral models and tools across many different areas — single-security pricing, prepayment models, risk, return attribution, liquidity, optimization and portfolio construction, scenario analysis and simulations, etc. — and covering all asset classes. The group is also responsible for the technology platform that delivers those models to our internal partners and external clients, and their integration with Aladdin. AFE conducts leading research on the areas above, delivering state-of-the-art models. AFE publishes applied scientific research frequently, and our members present regularly at leading industry conferences. AFE engages constantly with the sales team in client visits and meetings.

Job Description: You can help conduct research to build quantitative financial models and portfolio analytics that help manage most of the money of the world’s largest asset manager. You can bring your whole self to the job.
From the top of the firm down, we embrace the values, identities and ideas brought by our employees. We are looking for curious people with a strong background in quantitative research, data science and machine learning, who have excellent problem-solving skills and an insatiable appetite for learning and innovating, adding to BlackRock’s vibrant research culture. If any of this excites you, we are looking to expand our team. We currently have a quant researcher role with the AFE Investment AI (IAI) Team. The securities market is undergoing a massive transformation as the industry embraces machine learning and, more broadly, AI, to help evolve the investment process. Pioneering this journey at BlackRock, the team delivers applied AI investment analytics to help both BlackRock and Aladdin clients achieve scale through automation while safeguarding alpha generation. The IAI team combines AI/ML methodology and technology skills with deep subject matter expertise in fixed income, equity, and multi-asset markets, and the buy-side investment process. We are building next-generation liquidity, security similarity and pricing models, leveraging our expertise in quantitative research, data science and machine learning. The models we build use cutting-edge econometric, statistical and machine learning methods; they have real practical value and are used by traders, portfolio managers and risk managers representing different investment styles (fundamental vs. quantitative) and across different investment horizons. Research is conducted predominantly in Python and Scala, and implemented into production by a separate, dedicated team of developers.
These models have a huge footprint of usage across the entire Aladdin client base, so we place special emphasis on scalability and on adherence to BlackRock’s rigorous standards of model governance and control.

Background And Responsibilities

We are looking to hire a quant researcher with 4+ years’ experience to join the AFE Investment AI team focusing on Trading and Liquidity, working closely with other data scientists/researchers to support Risk Managers, Portfolio Managers and Traders. We build cutting-edge liquidity analytics using a wide range of ML algorithms and a broad array of technologies (Python, Scala, Spark/Hadoop, GCP, Azure). This role is a great opportunity to work closely with the Portfolio Managers, Risk Managers and Trading teams, spanning areas such as: Perform analysis of large data sets comprising market data, trading data and derived analytics. Evaluate trading data, including pre-processing, feature engineering, variable selection, dimensionality reduction, etc. Leverage machine learning to extract insights from data and work with investment managers to put those into action. Design and develop models/ML solutions for Trading & Liquidity. Implement the models and integrate them into the Aladdin analytical system in accordance with BlackRock’s model governance policy.

Qualifications

B.Tech / B.E. / M.Sc. degree in a quantitative discipline (Mathematics, Physics, Computer Science, Finance or a similar area); M.Tech. / PhD is a plus. Strong background in Mathematics, Statistics, Probability, Linear Algebra. Knowledgeable about data mining, data analytics, data modeling. Confident in building models to solve problems including time series forecasting and clustering, with hands-on experience in a range of statistical and machine learning approaches. Ability to work independently and efficiently in a fast-paced, team-oriented environment. Knowledge of fixed income and credit instruments and markets a plus.
Previous experience or knowledge in market liquidity is not required but is a big plus. For professionals with no prior financial industry experience, this position is a unique opportunity to gain in-depth knowledge of the asset management process in a world-class organization.

Our Benefits

To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model

BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock

At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees.
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock. BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other attributes protected by law.
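As a flavor of the pre-processing and feature-engineering work the posting above describes, here is a minimal, stdlib-only Python sketch; the trade fields, window size, and feature names are purely hypothetical, and real liquidity models would draw on far richer market data:

```python
from statistics import mean, stdev

def liquidity_features(trades, window=3):
    """Derive simple per-trade liquidity features from raw trade records.

    `trades` is a list of dicts with hypothetical fields 'price' and
    'size'; each output row carries a few rolling statistics over the
    trailing `window` trades.
    """
    features = []
    for i, t in enumerate(trades):
        hist = trades[max(0, i - window + 1): i + 1]  # trailing window
        sizes = [h["size"] for h in hist]
        features.append({
            "notional": t["price"] * t["size"],                    # trade value
            "avg_size": mean(sizes),                               # rolling mean size
            "size_vol": stdev(sizes) if len(sizes) > 1 else 0.0,   # size dispersion
        })
    return features

trades = [
    {"price": 100.0, "size": 10},
    {"price": 101.0, "size": 30},
    {"price": 99.5,  "size": 20},
]
feats = liquidity_features(trades)
```

In production, steps like these would be followed by variable selection and dimensionality reduction before any model fitting.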
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Data Analyst
Location: Pune, India

About This Role

Comscore India is looking for a new senior data analyst who will extract, transform, and analyse data, find insights, and answer questions about the content of the data. This person will perform analytical hypothesis testing and modeling to provide key insights to internal and external clients. Other responsibilities include supporting the sales function by providing data expertise, feasibility reviews, and detailed analysis of questions raised by internal and external stakeholders. This more senior role involves the creation and innovation of Comscore’s offerings to the marketplace and is responsible for managing and leading cross-functional teams of analysts. Some roles create and maintain internal and external user interfaces and reporting tools. These roles cross all Comscore product areas, including digital, mobile, OTT, and TV.

What You’ll Do

Autonomously initiate and manage cross-functional projects. Present findings to internal and external clients, peers, and upper management. Work with cross-functional teams to implement QA methods; may work on improving user experience. Participate in sales calls with clients to discuss potential custom research initiatives. Provide support for pre-sales initiatives. Manage the execution of custom research projects from outlined service order to completion. Manage and perform multiple tasks under conditions of fluctuating workloads, competing requirements, and changing deadlines. Identify process efficiencies and automation opportunities. Validate custom analytics against external sources, understanding similarities and being able to explain differences. Develop and enhance assigned products. Mentor and train new team members.

What You’ll Need

4-8 years of related experience in data mining, SQL, Python, PySpark, Scala.
1-2 years of experience with Comscore’s offerings and research methods, and/or comparable experience in market research 1-2 years of experience managing projects Ability to partner, influence and impact others Shift Timing The regular hours for this position will cover a combination of business hours in the US and India – typically 2pm-11pm IST. Occasionally, later hours may be required for meetings with teams in other parts of the world. Additionally, for the first 4-6 weeks of onboarding and training, US Eastern time hours (IST -9:30) may be required. Benefits Medical Insurance coverage is provided to our employees and their dependants, 100% covered by Comscore; Provident Fund is borne by Comscore, and is provided over and above the gross salary to employees; 26 Annual leave days per annum, divided into 8 Casual leave days and 18 Privilege leave days; Comscore also provides a paid “Recharge Week” over the Christmas and New Year period, so that you can start the new year fresh; In addition, you will be entitled to: 10 Public Holidays; 10 Sick leave days; 5 Paternity leave days; 1 Birthday leave day. Flexible work arrangements; “Summer Hours” are offered from March to May: Comscore offers employees the flexibility to work more hours from Monday to Thursday, and the hours can be offset on Friday from 2:00pm onwards; Employees are eligible to participate in Comscore’s Sodexo Meal scheme and enjoy tax benefits; About Comscore At Comscore, we’re pioneering the future of cross-platform media measurement, arming organizations with the insights they need to make decisions with confidence. Central to this aim are our people who work together to simplify the complex on behalf of our clients & partners. Though our roles and skills are varied, we’re united by our commitment to five underlying values: Integrity, Velocity, Accountability, Teamwork, and Servant Leadership. 
If you’re motivated by big challenges and interested in helping some of the largest and most important media properties and brands navigate the future of media, we’d love to hear from you. Comscore (NASDAQ: SCOR) is a trusted partner for planning, transacting and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top and theatrical viewership intelligence with advanced audience insights, Comscore allows media buyers and sellers to quantify their multiscreen behavior and make business decisions with confidence. A proven leader in measuring digital and set-top box audiences and advertising at scale, Comscore is the industry’s emerging, third-party source for reliable and comprehensive cross-platform measurement. To learn more about Comscore, please visit Comscore.com. Comscore is committed to creating an inclusive culture, encouraging diversity.
Posted 1 week ago
4.0 - 9.0 years
9 - 19 Lacs
Pune
Work from Office
We are seeking a Data Engineer with strong expertise in Microsoft Fabric and Databricks to support our enterprise data platform initiatives.

Role: Data Engineer – Microsoft Fabric & Databricks
Location: Pune / Remote

Key Responsibilities:
• Develop and maintain scalable data platforms using Microsoft Fabric for BI and Databricks for real-time analytics.
• Build robust data pipelines for SAP, MS Dynamics, and other cloud/on-prem sources.
• Design enterprise-scale Data Lakes and integrate structured/unstructured data.
• Optimize algorithms developed by data scientists and ensure platform reliability.
• Collaborate with data scientists, architects, and business teams in a global environment.
• Perform general administration, security, and monitoring of data platforms.

Mandatory Skills:
• Experience with Microsoft Fabric (Warehouse, Lakehouse, Data Factory, DataFlow Gen2, Semantic Models) and/or Databricks (Apache Spark).
• Strong background in Python, SQL (Scala is a plus), and API integration.
• Hands-on experience with Power BI and various database technologies (RDBMS, OLAP, Time Series).
• Experience working with large datasets, preferably in an industrial or enterprise environment.
• Proven skills in performance tuning, data modeling, data mining, and cloud security (Azure preferred).

Nice to Have:
• Knowledge of Azure data services (Storage, Networking, Billing, Security).
• Experience with DevOps, agile software development, and working in international/multicultural teams.

Candidate Requirements:
• 4+ years of experience as a data engineer.
• Bachelor’s or Master’s degree in Computer Science, Information Systems, or related fields.
• Strong problem-solving skills and high attention to detail.
• Proficiency in English (written and verbal).

Please share your resume at Neesha1@damcogroup.com
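The pipeline responsibilities above follow a classic extract-transform-load shape. As an illustration only (not Fabric- or Databricks-specific), here is a minimal Python sketch using SQLite as a stand-in warehouse; the table and column names are hypothetical:

```python
import csv
import io
import sqlite3

def extract(csv_text):
    # Extract: parse raw CSV rows (in production this would read from a
    # lakehouse/ADLS path rather than an in-memory string).
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    # Transform: cast types and drop malformed records.
    out = []
    for r in rows:
        try:
            out.append({"id": int(r["id"]), "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail type casting
    return out

def load(rows, conn):
    # Load: idempotent upsert into a warehouse table (SQLite stands in here).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT OR REPLACE INTO sales VALUES (:id, :amount)", rows)
    conn.commit()

raw = "id,amount\n1,10.5\n2,oops\n3,7.0\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The same extract/transform/load separation carries over directly to Data Factory or Spark jobs, just with distributed readers and writers in place of the stdlib pieces.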
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India . Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience. 3 years of experience in building Machine Learning or Data Science solutions. Experience in Python, Scala, R, or related, with data structures, algorithms, and software design. Ability to travel up to 30% of the time as needed. Preferred qualifications: Experience with recommendation engines, data pipelines, or distributed machine learning with data analytics, data visualization techniques and software, and deep learning frameworks. Experience in software development, professional services, solution engineering, technical consulting with architecting and rolling out new technology and solution initiatives. Experience with Data Science techniques. Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load/Extract, Load and Transform (ETL/ELT) and reporting tools and environments. Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks. Excellent communication skills. About the job The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. 
As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners. In this role, you will help ensure that customers have the best experience moving to the Google Cloud machine learning (ML) suite of products. You will design and implement machine learning solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with machine learning, and will travel to customer sites to deploy solutions and deliver workshops designed to educate and empower customers to realize the potential of Google Cloud. You will have access to Google’s technology to monitor application performance, debug and troubleshoot product code, and address customer and partner needs. You will lead the execution of adapting Google Cloud Platform solutions to the customer’s requirements. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities

Deliver big data and machine learning solutions and solve technical customer challenges. Act as a trusted technical advisor to Google’s customers. Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform. Deliver recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
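As an illustration of the machine-learning fundamentals a role like the one above draws on, here is a toy, stdlib-only Python sketch that fits a line by gradient descent; in practice such roles would use managed ML tooling rather than hand-rolled training loops:

```python
def fit_linear(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 3x + 1; the fit should recover these parameters.
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_linear(xs, ys)
```

The same loss-and-gradient structure underlies the deep learning frameworks the posting lists, which differentiate the loss automatically instead of by hand.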
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

What We Do

At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.

Who We Look For

Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Role Summary

Our clients increasingly require high-throughput, straight-through processing systems handling large amounts of data, which is driving a re-architecture of our current platforms. The Asset Servicing Technology team is looking for software developers who can deliver technology solutions to enable new business activities, allow businesses to scale up, and satisfy the needs of a wide range of functions. We are a highly innovative team building an architecture from scratch, and we take full ownership of our systems. The projects that you work on will contribute to creating a set of platforms, services and APIs that can be used for multiple business needs, and which you will help design, build and deliver. This role will enable you to grow technical depth and expertise, and you will be working closely with senior technical experts. You will also have the opportunity to gain an in-depth understanding of the Corporate Actions life cycle. The technologies we use are broad, with a technology stack that includes Java, React, Redux, MongoDB, SQL, and Python, among others. A willingness to learn new languages, technologies, and the business, and to apply best software development practices, will be key to your success in this role.
Key Responsibilities

Design, develop and maintain complex software systems and applications. Collaborate with cross-functional teams to gather requirements and define technical solutions. Implement and maintain best practices for software development and engineering processes. Develop and maintain software documentation, including design specifications, user guides and manuals. Ensure the reliability, scalability, and performance of software systems. Troubleshoot and debug complex software issues. Mentor and coach junior engineers.

Qualifications

3+ years of professional experience coding in any language (Java, JavaScript, Python, C#, Scala, C++). Experience with relational or NoSQL databases. Strong technical, analytical, and communication skills. Willingness to learn and apply new technical and functional skills. Self-starter.

Goldman Sachs Engineering Culture

At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2025. All rights reserved.
Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity
Posted 1 week ago
6.0 - 11.0 years
25 - 35 Lacs
Pune
Hybrid
Position: Data Engineer
Location: Pune
Experience: 6+ years

Must Have: Tech-savvy engineer – willing and able to learn new skills and track industry trends. 5+ years of solid data engineering experience, especially in open-source, data-intensive, distributed environments, with experience in Big Data technologies like Spark, Hive, HBase, Scala, etc. Programming background preferred in Scala / Python. Experience in Scala, Spark, PySpark and Java (good to have). Experience in migration of data to AWS or another cloud. Experience in SQL and NoSQL databases. Optional: experience modeling data sets migrated from Teradata to the cloud. Experience in building ETL pipelines. Experience in building data pipelines in AWS (S3, EC2, EMR, Athena, Redshift) or another cloud. Self-starter and resourceful personality with the ability to manage pressure situations. Exposure to Scrum and Agile development best practices. Experience working with geographically distributed teams.

Role & Responsibilities: Build data and ETL pipelines in AWS. Support migration of data to the cloud using Big Data technologies like Spark, Hive, Talend, Python. Interact with customers on a daily basis to ensure smooth engagement. Be responsible for timely and quality deliveries. Fulfill organizational responsibilities: share knowledge and experience with other groups in the organization, and conduct technical sessions and trainings.
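One recurring step in the cloud-migration work described above is laying data out in date-partitioned form (for example, the dt=YYYY-MM-DD prefixes commonly used in S3 data lakes). A minimal Python sketch of that grouping step, with hypothetical field names; in Spark this would typically be a `partitionBy` on write rather than hand-rolled code:

```python
from collections import defaultdict

def partition_by_date(records):
    """Group records into date-keyed partitions, mirroring an
    s3://bucket/table/dt=YYYY-MM-DD/ layout (names illustrative only)."""
    parts = defaultdict(list)
    for rec in records:
        parts[f"dt={rec['event_date']}"].append(rec)
    return dict(parts)

records = [
    {"event_date": "2024-01-01", "value": 1},
    {"event_date": "2024-01-02", "value": 2},
    {"event_date": "2024-01-01", "value": 3},
]
layout = partition_by_date(records)
```

Partitioning on a query predicate like the event date lets engines such as Athena or Spark prune whole prefixes instead of scanning the full table.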
Posted 1 week ago
0.0 - 3.0 years
3 - 5 Lacs
Mumbai
Work from Office
Responsibilities: Designing, developing and maintaining core system features, services and engines. Collaborating with a cross-functional team of backend, mobile application, AI, signal processing and robotics engineers, as well as the design, content, and linguistics teams, to realize the requirements of a conversational social robotics platform, which includes investigating design approaches, prototyping new technology, and evaluating technical feasibility. Ensuring the developed backend infrastructure is optimized for scale and responsiveness. Ensuring best practices in design, development, security, monitoring, logging, and DevOps are adhered to throughout the execution of the project. Introducing new ideas, products and features by keeping track of the latest developments and industry trends. Operating in an Agile/Scrum environment to deliver high-quality software against aggressive schedules.

Requirements: Proficiency in the distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateways), programming techniques and tools (application of tested, proven development paradigms). Proficiency in working on Linux-based operating systems. Proficiency in at least one server-side programming language like Java; additional languages like Python and PHP are a plus. Proficiency in at least one server-side framework like Servlets, Spring, or Java Spark. Proficiency in using ORM/data-access frameworks like Hibernate or JPA with Spring or other server-side frameworks. Proficiency in at least one data serialization framework: Apache Thrift, Google Protocol Buffers, Apache Avro, Gson, Jackson, etc. Proficiency in at least one inter-process communication approach: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc. Proficiency in multithreaded programming and concurrency concepts (threads, thread pools, futures, asynchronous programming).
Experience defining system architectures and exploring technical feasibility tradeoffs (architecture, design patterns, reliability and scaling). Experience developing cloud software services and an understanding of design for scalability, performance and reliability. Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory and I/O bottlenecks and handling read/write-heavy workloads. Proficiency in the concepts of monolithic and microservice architectural paradigms. Proficiency in working on at least one cloud hosting platform like Amazon AWS, Google Cloud, Azure, etc. Proficiency in at least one SQL, NoSQL or graph database, such as MySQL, MongoDB, OrientDB. Proficiency in at least one testing framework or tool: JMeter, Locust, Taurus. Proficiency in at least one RPC communication framework (Apache Thrift, gRPC) is an added plus. Proficiency in asynchronous libraries (RxJava) or frameworks (Akka, Play, Vert.x) is an added plus. Proficiency in functional programming languages (Scala) is an added plus. Proficiency in working with NoSQL/graph databases is an added plus. Proficient understanding of code versioning tools, such as Git, is an added plus. Working knowledge of tools for server and application metrics logging and monitoring (Monit, ELK, Graylog) is an added plus. Working knowledge of DevOps configuration-management utilities like Ansible, Salt, Puppet is an added plus. Working knowledge of containerization technologies like Docker, LXD is an added plus. Working knowledge of container orchestration platforms like Kubernetes is an added plus.
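For the thread-pool and futures concepts listed in the requirements above, here is a minimal Python sketch using `concurrent.futures`; the handler is a hypothetical placeholder for the I/O-bound work (DB calls, RPC) a real backend would do:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def handle_request(req_id):
    # Placeholder handler; real services would perform I/O-bound work
    # here, which is where pooled threads pay off.
    return req_id * 2

# A bounded pool keeps concurrency predictable under load spikes.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(8)]
    # as_completed yields futures as they finish, in arbitrary order.
    results = sorted(f.result() for f in as_completed(futures))
```

The same submit/await-futures pattern appears in Java's `ExecutorService` and `CompletableFuture`, which the posting's Java-centric stack would more likely use.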
Posted 1 week ago
4.0 - 8.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Software Engineer (MX Dashboard) – Ruby on Rails/Python/Java/Scala/NodeJS/JavaScript, 4+ years of experience

About the Role: Our Dashboard teams build and maintain our web applications, which manage millions of network devices from our cloud. Our customers use the Meraki Dashboard to monitor and configure critical IT infrastructure that serves tens of millions of people every day. As a Software Engineer on the MX Dashboard team, you will collaborate with firmware and other Backend/SRE/Dashboard engineers to architect, design, and build a large-scale system running MX SDWAN & Security features. You will enable connections between over a million network nodes and our SDWAN & Security customers relying on our products to serve tens of millions of people. With the large footprint that we have, quality is our highest priority. The MX Dashboard team is responsible for delivering a simple-to-use but very powerful, scalable, and groundbreaking cloud-managed service to customers. With help from product managers and firmware engineers, you will construct intuitive but powerful systems that will be used by customers via the Meraki Dashboard.

What you will work on: Solve challenging architecture problems to build scalable and extendable systems. Work with firmware engineers and PMs to build intuitive and powerful workflows to handle containers. Coordinate and align knowledge and opinions between firmware, SRE, and Dashboard developers. With the help of other engineers, implement sophisticated Backend & Dashboard systems to handle MX SDWAN & Security solutions. Identify and solve performance bottlenecks in our Backend architecture. Take complete ownership from conception to production release by leveraging your ability to influence, facilitate, and work collaboratively across teams. Lead, mentor, and spread best practices to other specialists on the team.
You are an ideal fit if you have: 4+ years of experience writing professional production code and tests for large-scale systems. 3+ years of experience in Backend & Full Stack technologies: Ruby on Rails/Python/Scala/Java/NodeJS/JavaScript. Ability to implement efficient database designs and tune query performance in a relational database (Postgres, SQL). Experience with container solutions (Kubernetes). A strategic and product-oriented approach with a desire to understand users. Outstanding communication skills. Bonus points for any of the following: Experience or interest in Security or Networking. Experience in building rich web UIs with React (and Redux). Familiarity with observability tools like ELK, Grafana, etc.
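On the query-performance point above: the effect of an index shows up directly in a database's query plan. A small sketch using SQLite as a stand-in for Postgres (where you would use `EXPLAIN ANALYZE` instead); the table, column, and index names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE devices (id INTEGER PRIMARY KEY, org_id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO devices (org_id, name) VALUES (?, ?)",
    [(i % 100, f"dev-{i}") for i in range(1000)])

# Without an index, filtering by org_id scans every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM devices WHERE org_id = 7").fetchone()

# With an index, the planner switches to an index search.
conn.execute("CREATE INDEX idx_devices_org ON devices(org_id)")
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM devices WHERE org_id = 7").fetchone()
```

The plan detail changes from a full-table SCAN to a SEARCH via `idx_devices_org`; on a dashboard table with millions of devices, that difference dominates query latency.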
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India . Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience. 3 years of experience in building Machine Learning or Data Science solutions. Experience in Python, Scala, R, or related, with data structures, algorithms, and software design. Ability to travel up to 30% of the time as needed. Preferred qualifications: Experience with recommendation engines, data pipelines, or distributed machine learning with data analytics, data visualization techniques and software, and deep learning frameworks. Experience in software development, professional services, solution engineering, technical consulting with architecting and rolling out new technology and solution initiatives. Experience with Data Science techniques. Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load/Extract, Load and Transform (ETL/ELT) and reporting tools and environments. Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks. Excellent communication skills. About the job The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. 
As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners. In this role, you will help ensure that customers have the best experience moving to the Google Cloud machine learning (ML) suite of products. You will design and implement machine learning solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with machine learning, and will travel to customer sites to deploy solutions and deliver workshops designed to educate and empower customers to realize the potential of Google Cloud. You will have access to Google’s technology to monitor application performance, debug and troubleshoot product code, and address customer and partner needs. You will lead the execution of adapting Google Cloud Platform solutions to the customer’s requirements. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities

Deliver big data and machine learning solutions and solve technical customer challenges. Act as a trusted technical advisor to Google’s customers. Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform. Deliver recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago
7.0 - 12.0 years
25 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Cloud Data Engineer

The Cloud Data Engineer will be responsible for developing the data lake platform and all applications on Azure cloud. Proficiency in data engineering, data modeling, SQL, and Python programming is essential. The Data Engineer will provide design and development solutions for applications in the cloud.

Essential Job Functions:
- Understand requirements and collaborate with the team to design and deliver projects.
- Design and implement data lakehouse projects within Azure.
- Develop the application lifecycle utilizing Microsoft Azure technologies.
- Participate in design, planning, and necessary documentation.
- Engage in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
- Apply hands-on experience with Python/SQL development and Azure data pipelines.
- Collaborate with the team to develop and deliver cross-functional products.

Key Skills:
- Data Engineering and SQL
- Python
- PySpark
- Azure Data Lake and ADF
- Databricks
- CI/CD
- Strong communication

Other Responsibilities:
- Document and maintain project artifacts.
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
- Complete training as required for Privacy, Code of Conduct, etc.
- Promptly report any known or suspected loss, theft, or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
- Adhere to the company's compliance program.
- Safeguard the company's intellectual property, information, and assets.
- Other duties as assigned.

Minimum Qualifications and Job Requirements:
- Bachelor's degree in Computer Science.
- 7 years of hands-on experience in designing and developing distributed data pipelines.
- 5 years of hands-on experience with Azure data service technologies.
- 5 years of hands-on experience in Python, SQL, object-oriented programming, ETL, and unit testing.
- Experience with data integration via APIs, web services, and queues.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.

Required: Azure Data Engineer Associate and Databricks data engineering certifications.
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary

We are seeking a highly skilled and motivated Data Engineer to join our team. As a Data Engineer, you will play a crucial role in designing, building, and maintaining scalable data infrastructure, ensuring seamless data flow across systems, and enabling data-driven decision-making within the organization.

Key Responsibilities
- Design, build, and maintain robust data pipelines to process structured and unstructured data (experience across GCP, Azure, etc. preferred).
- Develop and optimize ETL processes to ensure data integrity and availability.
- Collaborate with data scientists, analysts, and software engineers to implement data solutions.
- Build and manage scalable storage systems, ensuring data accessibility and security.
- Monitor, troubleshoot, and optimize data systems for performance and reliability.
- Support data modeling, schema design, and database architecture.

Required Skills And Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Proficiency in programming languages such as Python, Java, or Scala.
- Expertise in SQL and NoSQL databases.
- Hands-on experience with ETL tools and frameworks.
- Familiarity with Kafka.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
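The schema-design and reliability duties listed above usually meet in a validation step run before records are loaded. A hedged sketch, with an entirely invented schema to show the shape of the check:

```python
# Hypothetical pre-load validation step for a pipeline like the one
# described. The schema, field names, and types are illustrative
# assumptions, not taken from the posting.

SCHEMA = {"event_id": str, "ts": int, "payload": dict}

def validate(record, schema=SCHEMA):
    """True when every schema field is present with the right type."""
    return all(
        isinstance(record.get(field), expected)
        for field, expected in schema.items()
    )

good = {"event_id": "e1", "ts": 1700000000, "payload": {"k": "v"}}
bad = {"event_id": "e2", "ts": "not-an-int", "payload": {}}

print(validate(good), validate(bad))  # True False
```

Rejected records would typically be routed to a dead-letter location (a Kafka topic or a quarantine table) rather than dropped, so bad data stays inspectable.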
Posted 1 week ago
Scala is a popular programming language that is widely used in India, especially in the tech industry. Job seekers looking for opportunities in Scala can find a variety of roles across different cities in the country. In this article, we will dive into the Scala job market in India and provide valuable insights for job seekers.
India's major tech hubs are known for their thriving tech ecosystems and have a high demand for Scala professionals.
The salary range for Scala professionals in India varies based on experience levels. Entry-level Scala developers can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
In the Scala job market, a typical career path may look like:
- Junior Developer
- Scala Developer
- Senior Developer
- Tech Lead
As professionals gain more experience and expertise in Scala, they can progress to higher roles with increased responsibilities.
In addition to Scala expertise, employers often look for candidates with the following skills:
- Java
- Spark
- Akka
- Play Framework
- Functional programming concepts
Having a good understanding of these related skills can enhance a candidate's profile and increase their chances of landing a Scala job.
Here are 25 interview questions that you may encounter when applying for Scala roles:
As you explore Scala jobs in India, remember to showcase your expertise in Scala and related skills during interviews. Prepare well, stay confident, and you'll be on your way to a successful career in Scala. Good luck!