
6030 Scala Jobs - Page 4

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). Take part in evaluation of new data tools, POCs and provide suggestions. Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus API: Working knowledge of API to consume data from ERP, CRM
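For context on the kind of pipeline work this listing describes, below is a minimal, illustrative Spark batch ETL sketch in Scala; the paths, column names, and file layout are invented assumptions, not Cummins code.

```scala
// Hypothetical sketch: ingest a relational extract, apply a simple
// transformation, and write partitioned Parquet to a data-lake path.
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Ingest: a CSV extract from a transactional system (assumed layout).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-landing/orders/")          // hypothetical path

    // Transform: basic cleansing and derived columns.
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .withColumn("net_amount", F.col("amount") - F.col("discount"))

    // Load: partitioned Parquet in the curated zone.
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated/orders/")      // hypothetical path

    spark.stop()
  }
}
```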

Posted 1 day ago

Apply

4.0 years

0 Lacs

Haryana, India

On-site

What do we do? The TTS Analytics team provides analytical insights to the Product, Pricing, Client Experience and Sales functions within the global Treasury & Trade Services business. The team works on business problems focused on driving acquisitions, cross-sell, revenue growth & improvements in client experience. The team extracts relevant insights, identifies business opportunities, converts business problems into analytical frameworks, uses big data tools and machine learning algorithms to build predictive models & other solutions, and designs go-to-market strategies for a huge variety of business problems. Role Description The role will be Data/Information Mgt Analyst 2 (C10) in the TTS Analytics team. The role will report to the AVP/VP leading the team. The role will involve working on multiple analyses through the year on business problems across the client life cycle – acquisition, engagement, client experience and retention – for the TTS business. The work involves setting up and optimizing data pipelines using big data technologies such as PySpark, Scala, and Hive. The role will also include working with SQL and NoSQL databases (e.g., MongoDB) to manage and retrieve data effectively. The role requires designing and deploying interactive Tableau dashboards to visualize data insights and provide stakeholders with actionable information using features such as Tableau Prep Flows, Level of Detail (LOD) Expressions, Table Calculations etc. This will involve leveraging multiple analytical approaches, tools and techniques, working on multiple data sources (client profile & engagement data, transactions & revenue data, digital data, unstructured data like call transcripts etc.) to enable data-driven insights to business and functional stakeholders. Experience: Bachelor’s Degree with 4+ years of experience in data analytics, or Master’s Degree with 2+ years of experience in data analytics Must have: Marketing analytics experience Proficiency in designing and deploying Tableau dashboards Strong experience in data engineering and building data pipelines Experience with big data technologies such as PySpark, Scala, and Hive Proficiency in SQL and experience with various database systems (e.g., MongoDB) Good to have: Experience in financial services Experience across different analytical methods like hypothesis testing, segmentation, time series forecasting, test vs. control comparison etc. Skills: Analytical Skills: Strong analytical and problem-solving skills related to data manipulation and pipeline optimization Has the ability to work hands-on to retrieve and manipulate data from big data environments Ability to design efficient data models and schemas Tools and Platforms: Proficient in Python/R, SQL Experience in PySpark, Hive, and Scala Strong knowledge of SQL and NoSQL databases such as MongoDB etc. Proficiency with Tableau (designing and deploying advanced, interactive dashboards) Proficient in MS Office Tools such as Excel and PowerPoint Soft Skills: Strong analytical and problem-solving skills Excellent communication and interpersonal skills Be organized, detail oriented, and adaptive to matrix work environment ------------------------------------------------------ Job Family Group: Decision Management ------------------------------------------------------ Job Family: Data/Information Management ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. 
------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
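As a rough illustration of the Spark-on-Hive pipeline step this listing describes, here is a hedged Scala sketch; the database, table, and column names are invented, and a configured Hive metastore is assumed.

```scala
// Illustrative only: read a (hypothetical) Hive table and produce a monthly
// engagement rollup, a typical feed for a Tableau dashboard.
import org.apache.spark.sql.{SparkSession, functions => F}

object EngagementRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tts-engagement-rollup")
      .enableHiveSupport()                 // requires a configured Hive metastore
      .getOrCreate()

    val events = spark.sql(
      "SELECT client_id, channel, event_ts FROM analytics.engagement_events")

    val monthly = events
      .withColumn("month", F.date_format(F.col("event_ts"), "yyyy-MM"))
      .groupBy("client_id", "month")
      .agg(F.count(F.lit(1)).as("events"))

    monthly.write.mode("overwrite").saveAsTable("analytics.engagement_monthly")
    spark.stop()
  }
}
```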

Posted 1 day ago

Apply

7.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Overview We are seeking a highly skilled and experienced Lead Data Engineer AWS to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. Responsibilities The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments. Responsibilities: Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis). Lead end-to-end implementation of data architecture strategies including ingestion, storage, transformation, and data governance. Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions. Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like AWS Glue Data Catalog or Apache Atlas. Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources. Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations. Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices. Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git. Monitor production systems and lead incident response and root cause analysis for data infrastructure issues. Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform. Skills & Qualifications: Minimum 7 years of experience in data engineering with at least 3+ years in a lead or senior engineering role. Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB. Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic. Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop). Solid grasp of SQL and experience with performance tuning in large-scale environments. Experience implementing data lakes, lakehouse architecture, and data warehousing solutions on cloud. Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK. Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation. Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana). Expertise in data governance, security, and compliance in cloud environments (ref:hirist.tech)
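The streaming-pipeline requirement above (Kafka/Kinesis into a lake) can be pictured with this hedged Spark Structured Streaming sketch in Scala; the broker, topic, and S3 paths are placeholders, not a real configuration.

```scala
// Illustrative only: consume a Kafka (or MSK) topic with Structured Streaming
// and land the raw events as Parquet with checkpointing.
import org.apache.spark.sql.SparkSession

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-ingest").getOrCreate()

    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")   // hypothetical broker
      .option("subscribe", "clickstream")                  // hypothetical topic
      .option("startingOffsets", "latest")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS payload")

    val query = stream.writeStream
      .format("parquet")
      .option("path", "s3://example-lake/raw/clickstream/")          // hypothetical path
      .option("checkpointLocation", "s3://example-lake/_chk/clickstream/")
      .start()

    query.awaitTermination()
  }
}
```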

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Solution Implementation Manager at Crowe, you will play a crucial role in leading and executing implementation projects while closely collaborating with senior stakeholders and clients to deliver value to financial institution clients. Your responsibilities will involve leading teams of analysts, working independently on engagements, and implementing out-of-the-box/customization solutions mainly in the financial crime domain, requiring a strong understanding of anti-money laundering and the banking industry. Your qualifications and experience should include being a Certified CAMS or willing to obtain certification, having a minimum of 3 years of experience working on AML platforms such as Verafin, SAS, Oracle, Actimize AML, WLF, and Fraud, and familiarity with internally hosted or vendor-hosted cloud solutions. You should be well-versed in AWS and Google Cloud implementation of solutions, with exposure to Docker, GitHub, UNIX, and Windows implementations. Proficiency in coding in Java, Python, and SQL is essential, and additional skills in Scala, SAS, Oracle, MsSQL, and data visualization tools like Tableau, MS Power BI, R Shiny would be advantageous. Understanding and experience in Machine Learning/AI is also desirable. Your role will involve setting and achieving deadlines and objectives, working on both external and internal projects, and possessing strong communication and interpersonal skills to engage effectively with company/client executives. You should be able to work collaboratively within a team and manage multiple projects simultaneously. In addition to technical skills, we expect you to embody Crowe's values of Care, Trust, Courage, and Stewardship, acting ethically and with integrity at all times. As a part of our inclusive culture that values diversity, you will have the opportunity to work with a Career Coach who will help guide you in achieving your career goals and aspirations. Crowe offers a comprehensive benefits package to its employees, recognizing that great people are at the core of a great firm. As you grow within the organization, you will have the opportunity to thrive in an environment that fosters talent and supports individual development. Crowe Horwath IT Services Private Ltd. is a wholly owned subsidiary of Crowe LLP (U.S.A.), a global public accounting, consulting, and technology firm with a presence across the world. Crowe LLP is an independent member firm of Crowe Global, a leading global accounting network comprising over 200 independent accounting and advisory firms in more than 130 countries. Please note that Crowe does not accept unsolicited candidates, referrals, or resumes from staffing agencies or third-party services without a prior agreement. Candidates not submitted through the appropriate channels will be considered the property of Crowe, and no fees will be charged for such submissions.

Posted 1 day ago

Apply

100.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities Your future team To become a 100 year company, we need a world-class engineering organisation of empowered teams with the tools and infrastructure to do the best work of their careers. As a part of a unified R&D team, Engineering is prioritising key initiatives which support our customers as they increase their adoption of Atlassian Cloud products and services while continuing to support their current needs at extreme enterprise scale. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams. What You'll Do Build and ship features and capabilities daily in highly scalable, cross-geo distributed environment Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge sharing sessions Ensure full visibility, error reporting, and monitoring of high performing backend services Participate in Agile software development including daily stand-ups, sprint planning, team retrospectives, show and tell demo sessions Your background 4+ years of experience building and developing backend applications Bachelor's or Master's degree with a preference for Computer Science degree Experience crafting and implementing highly scalable and performant RESTful micro-services Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Strong understanding of CI/CD reliability principles, including test strategy, security, and performance benchmarking. Real passion for collaboration and strong interpersonal and communication skills Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure) Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality Qualifications Benefits & Perks Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits . About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. 
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
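To make the "RESTful micro-services" requirement concrete, here is a deliberately tiny, dependency-free Scala sketch of an HTTP endpoint using the JDK's built-in server; a production service would sit on a proper framework with auth, metrics, and tests, and the port and route here are assumptions.

```scala
// Illustrative only: a single JSON health endpoint on the JDK HttpServer.
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}
import java.net.InetSocketAddress
import java.nio.charset.StandardCharsets

object HealthService {
  def main(args: Array[String]): Unit = {
    val server = HttpServer.create(new InetSocketAddress(8080), 0)

    server.createContext("/healthz", new HttpHandler {
      override def handle(exchange: HttpExchange): Unit = {
        val body = """{"status":"ok"}""".getBytes(StandardCharsets.UTF_8)
        exchange.getResponseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.length.toLong)
        val os = exchange.getResponseBody
        os.write(body)
        os.close()
      }
    })

    server.start()
    println("listening on :8080")
  }
}
```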

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role. Responsibilities: - Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems. - Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data. - Optimize SQL queries, stored procedures, and data transformations for enhanced performance. - Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi. - Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift. - Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR). - Work closely with data engineers, analysts, and business stakeholders to comprehend data requirements and deliver solutions. - Conduct data profiling, validation, and quality assurance to maintain clean and consistent data. - Maintain comprehensive documentation for data models, pipelines, and architecture. Required Skills & Qualifications: - Proficiency in advanced SQL, including query tuning, indexing, and performance optimization. - Experience in developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica. - Familiarity with data orchestration frameworks such as Airflow, Dagster, Luigi, etc. - Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies. - Deep understanding of data warehousing principles like star/snowflake schema, slowly changing dimensions, etc. - Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices. - Strong analytical and problem-solving skills in high-scale environments. Preferred Qualifications: - Exposure to real-time data pipelines like Kafka, Spark Streaming. - Knowledge of data mesh or data fabric architecture paradigms. - Certifications in Snowflake, Databricks, or relevant cloud platforms. - Familiarity with Python or Scala for data engineering tasks.
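For the dimensional-modeling and ETL/ELT skills listed above, the following hedged Scala sketch shows an upsert into a dimension table with Delta Lake's MERGE (a Type 1 dimension; a Type 2 slowly changing dimension would additionally close out the prior row with effective dates). Table paths and columns are invented, and a Delta-enabled Spark session with the Delta library on the classpath is assumed.

```scala
// Illustrative only: upsert staged customer records into a dimension table.
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DimCustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("dim-customer-upsert").getOrCreate()

    val updates = spark.read.parquet("s3://example-staging/customers/")        // hypothetical staging extract
    val dim     = DeltaTable.forPath(spark, "s3://example-warehouse/dim_customer/")

    dim.as("d")
      .merge(updates.as("u"), "d.customer_id = u.customer_id")
      .whenMatched()
      .updateExpr(Map("name" -> "u.name", "segment" -> "u.segment"))
      .whenNotMatched()
      .insertExpr(Map(
        "customer_id" -> "u.customer_id",
        "name"        -> "u.name",
        "segment"     -> "u.segment"))
      .execute()

    spark.stop()
  }
}
```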

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

Avion manufactures Full Flight Simulators for the Airbus A320 Family and Boeing 737 NG & MAX. They operate Flight Training Centres at London Luton Airport and Mumbai, India. Gen24 Flybiz offers comprehensive services for aspiring pilots, airlines, and training organizations. In 2025, the Avion Flight Training Centre Mumbai - operated by Gen24 Flybiz - will be opened. At the facility, pilots can train on state-of-the-art Full Flight Simulators (FFS) and Flight Navigation Procedures Trainer (FNPTII) devices. Currently, the centre operates two Airbus A320neo Full Flight Simulators from Avion and an A320 FNPTII for APS MCC training, built by Simnest. Over the coming years, the plan is to expand to six to eight Full Flight Simulators, including additional Airbus A320s and Boeing 737 MAX devices, to provide comprehensive training solutions for airlines and individual pilots. Gen24 is in search of a Core Software Engineer to assist in developing core software for Full Flight Simulators. The core software enables distributed real-time simulation of all necessary models for the simulation. It allows user interaction with the simulation through the Instructor Operating System and generates simulated graphics for the cockpit displays. It also includes various Graphical User Interfaces (GUIs) used by developers and simulator maintenance personnel. Responsibilities include designing and developing supporting tools for the core framework such as real-time monitoring, graphical user interfaces, graphics generator editor, diagnostic tools, and mobile and web applications. The role also involves maintaining and upgrading key components of the core framework, including real-time scheduling, shared memory, multi-node syncing, graphics generator, and mobile and web applications. The ideal candidate should possess high analytical skills, the ability to translate high-level functional requirements into technical specifications, experience in software development in C++, Scala, Java, or related languages, familiarity with GUI development, preferably in JavaFX or QT, good verbal and written communication skills in English, a strong work ethic, and the ability to learn and adapt quickly. Desirable skills and experience include affinity with real-time simulation, distributed computing, multithreading, knowledge of data structures in memory and network protocols like UDP and TCP, familiarity with Object-Oriented Programming and Design Patterns, knowledge of Scala (or Java), OpenGL, reverse engineering of code, troubleshooting, full-stack web development, experience with Python, JavaScript, Scala, Svelte, markup languages (HTML, XML, LaTeX), web application design, and mobile application development. This job position is based at the Avion Flight Training Centre (operated by Gen24) in Mumbai, India. Working at Gen24 offers a challenging job in a successful and entrepreneurial environment with a high degree of freedom in acting. Collaboration within and between teams is essential, along with close cooperation with partners and customers. Gen24 provides support, training, and opportunities for personal development in a stimulating and inspiring environment. Gen24 values diversity and inclusivity, encouraging individuals from all backgrounds and perspectives to apply. They are committed to fostering an inclusive and transparent work environment where every voice is heard and acknowledged. If you believe you meet the criteria and are ready for a new challenge, Gen24 looks forward to hearing from you. 
You can apply through the Join.com webpage; please include your motivation letter and resume.
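As a loose illustration of the real-time, multi-node plumbing this role touches, here is a hedged Scala sketch that sends a small simulation-state packet over UDP with the JDK socket API; the peer address, port, and payload layout are invented.

```scala
// Illustrative only: pack a frame counter and three floats and send them as a datagram.
import java.net.{DatagramPacket, DatagramSocket, InetAddress}
import java.nio.ByteBuffer

object StateBroadcast {
  def main(args: Array[String]): Unit = {
    val socket = new DatagramSocket()
    val peer   = InetAddress.getByName("127.0.0.1")   // hypothetical peer node

    // 4 bytes frame counter + 3 x 4 bytes (e.g. pitch/roll/yaw) = 16 bytes.
    val buf = ByteBuffer.allocate(16)
    buf.putInt(42).putFloat(1.5f).putFloat(-0.2f).putFloat(0.0f)

    val packet = new DatagramPacket(buf.array(), buf.array().length, peer, 5005)
    socket.send(packet)
    socket.close()
  }
}
```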

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Big Data Architect specializing in Databricks at Codvo, a global empathy-led technology services company, your role is critical in designing sophisticated data solutions that drive business value for enterprise clients and power internal AI products. Your expertise will be instrumental in architecting scalable, high-performance data lakehouse platforms and end-to-end data pipelines, making you the go-to expert for modern data architecture in a cloud-first world. Your key responsibilities will include designing and documenting robust, end-to-end big data solutions on cloud platforms (AWS, Azure, GCP) with a focus on the Databricks Lakehouse Platform. You will provide technical guidance and oversight to data engineering teams on best practices for data ingestion, transformation, and processing using Spark. Additionally, you will design and implement effective data models and establish data governance policies for data quality, security, and compliance within the lakehouse. Evaluating and recommending appropriate data technologies, tools, and frameworks to meet project requirements and collaborating closely with various stakeholders to translate complex business requirements into tangible technical architecture will also be part of your role. Leading and building Proof of Concepts (PoCs) to validate architectural approaches and new technologies in the big data and AI space will be crucial. To excel in this role, you should have 10+ years of experience in data engineering, data warehousing, or software engineering, with at least 4+ years in a dedicated Data Architect role. Deep, hands-on expertise with Apache Spark and the Databricks platform is mandatory, including Delta Lake, Unity Catalog, and Structured Streaming. Proven experience architecting and deploying data solutions on major cloud providers, proficiency in Python or Scala, expert-level SQL skills, strong understanding of modern AI concepts, and in-depth knowledge of data warehousing concepts and modern Lakehouse patterns are essential. This position is remote and based in India with working hours from 2:30 PM to 11:30 PM. Join us at Codvo and be a part of a team that values Product innovation, mature software engineering, and core values like Respect, Fairness, Growth, Agility, and Inclusiveness each day to offer expertise, outside-the-box thinking, and measurable results.
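A hedged sketch of the bronze-to-silver lakehouse step implied above, in Scala on Delta Lake; paths and columns are invented, a Delta-enabled Spark session is assumed, and on Databricks these would normally be Unity Catalog tables rather than raw paths.

```scala
// Illustrative only: read a raw Delta table, cleanse it, write a curated Delta table.
import org.apache.spark.sql.{SparkSession, functions => F}

object BronzeToSilver {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bronze-to-silver").getOrCreate()

    val bronze = spark.read.format("delta").load("/lake/bronze/telemetry")    // hypothetical path

    val silver = bronze
      .dropDuplicates("event_id")
      .filter(F.col("device_id").isNotNull)
      .withColumn("ingest_date", F.to_date(F.col("ingest_ts")))

    silver.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("ingest_date")
      .save("/lake/silver/telemetry")                                          // hypothetical path

    spark.stop()
  }
}
```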

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Java Developer with 5-7 years of experience, you will be responsible for designing and developing Java services using the Java Spring Boot framework. Your role will involve implementing, supporting, troubleshooting, and maintaining applications. Additionally, you will develop high-standard SAS/Python code and model documentation. You will work on the release cycle of modern, Java-based web applications and develop automation scripts using Python to assist in day-to-day team activities. It is essential to write efficient, reusable, and reliable Java code to meet project requirements. To qualify for this position, you should have a Bachelor's Degree in Computer Science or a related field. You must possess 5-7 years of experience in Java development and demonstrate strong expertise in Java/J2EE technologies. Proficiency in web frontend technologies like HTML, JavaScript, and CSS is required. Knowledge of Java frameworks such as Spring MVC and Spring Security, as well as experience with REST APIs and writing Python libraries, are necessary. Familiarity with databases like MySQL, Oracle, and SQL is essential, along with strong scripting skills in languages like Python, Perl, or Bash. Experience in backend programming with Java/Python/Scala and the ability to work on full-stack development using Java technologies are valuable assets. The ideal candidate will have strong Java programming skills, experience with Java Spring framework and Hibernate, and proficiency in developing microservices using Java. Knowledge of design patterns, Java frameworks, front-end and back-end Java technologies, automation tools like Selenium and Protractor, web services, RESTful APIs, and ORM frameworks like Hibernate/JPA is beneficial. Additional skills such as proficiency in Python or relevant scripting languages, experience in web/mobile application development, understanding of high-level JavaScript concepts, ability to work with automation tools for testing, and knowledge of machine learning, AI, or data science are considered advantageous. This is a full-time position with health insurance and Provident Fund benefits. The job location is in Bangalore, Karnataka, India, requiring in-person attendance on a day shift schedule from Monday to Friday. The ability to commute or relocate to Hyderabad, Telangana, with an employer-provided relocation package is preferred. A Master's degree is preferred for education qualifications. If you meet the specified requirements and are comfortable relocating to Bangalore, apply for this Java Developer position and showcase your expertise in Java development, Spring Boot, and Python to contribute effectively to the team.

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus API: Working knowledge of API to consume data from ERP, CRM Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417810 Relocation Package Yes
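The data-quality monitoring and alerting responsibilities in this listing can be pictured with a hedged Scala/Spark sketch like the one below; metric names, thresholds, and the input path are assumptions for illustration.

```scala
// Illustrative only: compute simple data-quality metrics on a pipeline output
// and fail (or, in practice, alert) when they breach a threshold.
import org.apache.spark.sql.{SparkSession, functions => F}

object PipelineQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pipeline-dq-check").getOrCreate()

    val df = spark.read.parquet("s3://example-curated/orders/")   // hypothetical upstream output

    val total      = df.count()
    val nullKeys   = df.filter(F.col("order_id").isNull).count()
    val duplicates = total - df.dropDuplicates("order_id").count()

    val nullRate = if (total == 0) 0.0 else nullKeys.toDouble / total
    println(f"rows=$total nullKeyRate=$nullRate%.4f duplicates=$duplicates")

    // A real pipeline would raise an alert (e.g. SNS, PagerDuty) instead of failing hard.
    require(nullRate < 0.01, s"order_id null rate $nullRate exceeds threshold")
    require(duplicates == 0, s"$duplicates duplicate order_id values found")

    spark.stop()
  }
}
```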

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus API: Working knowledge of API to consume data from ERP, CRM Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417809 Relocation Package Yes
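For the "transactional systems (ERP, CRM) to DataLake" pipelines this listing mentions, here is a hedged Scala sketch of a single JDBC ingestion step; the connection string, credential handling, and table names are placeholders, and the appropriate JDBC driver is assumed to be on the classpath.

```scala
// Illustrative only: pull one table from a transactional source over JDBC
// and land it as Parquet in a data-lake landing zone.
import org.apache.spark.sql.SparkSession

object ErpTableIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("erp-table-ingest").getOrCreate()

    val invoices = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://erp-host:5432/erp")   // hypothetical source
      .option("dbtable", "finance.invoices")
      .option("user", sys.env.getOrElse("ERP_USER", "reader"))
      .option("password", sys.env.getOrElse("ERP_PASSWORD", ""))
      .option("fetchsize", "10000")
      .load()

    invoices.write
      .mode("overwrite")
      .parquet("s3://example-landing/erp/invoices/")           // hypothetical landing path

    spark.stop()
  }
}
```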

Posted 2 days ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Supports, develops and maintains a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycle, for data driven application. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. 
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 Years of experience. Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. 2) Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. 6) Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers. 7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. 8) Assist to resolve issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. 
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of APIs to consume data from ERP and CRM systems. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417808 Relocation Package Yes
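Purely for illustration, a minimal Scala/Spark sketch of the kind of pipeline this role describes: ingest a relational table over JDBC, apply a basic data-quality gate, and write partitioned Parquet to a data lake. The connection URL, table, credentials, and output path are hypothetical placeholders, not Cummins systems.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-ingest")
      .getOrCreate()

    // Ingest a relational source over JDBC (connection details are placeholders)
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://erp-host:5432/erp")
      .option("dbtable", "sales.orders")
      .option("user", sys.env("ERP_USER"))
      .option("password", sys.env("ERP_PASSWORD"))
      .load()

    // Basic data-quality gate: fail the run if required keys are missing
    val badRows = orders.filter(col("order_id").isNull || col("order_date").isNull).count()
    require(badRows == 0, s"Data-quality check failed: $badRows rows with null keys")

    // Write partitioned Parquet to the data lake (path is a placeholder)
    orders
      .withColumn("ingest_date", current_date())
      .write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("s3a://example-data-lake/raw/sales/orders")

    spark.stop()
  }
}

In a real pipeline the quality check would feed the monitoring and alerting the posting mentions rather than simply failing the run.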

Posted 2 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Apache Spark Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Assist in the documentation of application processes and workflows. - Engage in code reviews to ensure quality and adherence to best practices. Professional & Technical Skills: - Must-Have Skills: Proficiency in Apache Spark. - Strong understanding of distributed computing principles. - Experience with data processing frameworks and tools. - Familiarity with programming languages such as Java or Scala. - Knowledge of cloud platforms and services for application deployment. Additional Information: - The candidate should have a minimum of 3 years of experience in Apache Spark. - This position is based at our Noida office. - A 15-year full-time education is required.
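As a rough, hypothetical illustration of the Apache Spark proficiency the role asks for, the Scala sketch below aggregates event data with the DataFrame API; the input path, schema, and column names are invented for the example.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("daily-event-counts").getOrCreate()
    import spark.implicits._

    // Read raw events (path and schema are placeholders)
    val events = spark.read.json("s3a://example-bucket/events/*.json")

    // Aggregate events per type per day; Spark distributes the work across the cluster
    val daily = events
      .withColumn("event_date", to_date($"event_time"))
      .groupBy($"event_date", $"event_type")
      .agg(count("*").as("event_count"))
      .orderBy($"event_date")

    daily.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily_event_counts")
    spark.stop()
  }
}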

Posted 2 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

APN Consulting, Inc. is a progressive IT staffing and services company offering innovative business solutions to improve client business outcomes. We focus on high impact technology solutions in ServiceNow, Fullstack, Cloud & Data, and AI / ML. Due to our globally expanding service offerings, we are seeking top talent to join our teams and grow with us. Role: Fullstack Developer Location: Noida Work Mode: Hybrid Work hours: 2-11 pm India hours As a Senior Software Engineer on the Enterprise Development Services team, you will play a key role in designing and developing solutions, patterns, and standards to be adopted across engineering teams. You'll serve as a standard bearer for development practices, design quality, and technical culture, contributing through reusable components, best practices, and direct mentorship (e.g., pair programming, tutorials, internal presentations). You'll also provide regular progress updates to your manager and support team-wide alignment to architectural goals. Role And Responsibilities Build and maintain enterprise-grade backend services using Java microservices and front-end applications using React JS Develop reusable components, frameworks, and libraries for adoption across product teams Work with Jenkins and other CI/CD tools to automate build, deployment, and testing pipelines Collaborate with engineering teams to ensure adherence to best practices and coding standards Provide technical support for the adoption of shared services and components Participate in the evolution of company-wide standards and software development policies Adapt to shifting priorities in a dynamic environment Debug complex issues involving APIs, performance, and systems integration Support technical enablement and knowledge sharing across the organization Mandatory Skills 4–5 years of relevant experience in software development with a focus on full-stack and cloud-native technologies (Azure or AWS) Strong backend development skills using Java microservices Experience with front-end development using React JS Experience with Docker and Kubernetes (EKS or AKS) Experience with CI/CD tools such as Jenkins and Terraform (or similar) Familiarity with debugging common web issues (HTTP, XHR, JSON, CORS, SSL, S3, etc.) Proven ability to investigate performance and memory issues Strong understanding of API design and ability to reduce complex requirements into scalable architecture Knowledge of messaging patterns and tools such as Kafka or RabbitMQ Applies software engineering best practices, including design patterns and linting Strong communication and collaboration skills in cross-functional teams Demonstrated ability to deliver in fast-paced, changing environments Preferred Skills Familiarity with the Groovy programming language Experience with the Scala or Ruby on Rails programming languages Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience) We are committed to fostering a diverse, inclusive, and equitable workplace where individuals from all backgrounds feel valued and empowered to contribute their unique perspectives. We strongly encourage applications from candidates of all genders, races, ethnicities, abilities, and experiences to join our team and help us build a culture of belonging.
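Scala appears here only as a preferred skill, but since this page collects Scala roles, here is a minimal, hypothetical sketch of the Kafka messaging pattern the posting mentions, written in Scala against the standard Kafka client; the broker address, topic, and payload are placeholders.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object OrderEventPublisher {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.ACKS_CONFIG, "all") // wait for all replicas before acknowledging

    val producer = new KafkaProducer[String, String](props)
    try {
      // Publish a simple order event keyed by order id (topic and payload are hypothetical)
      val record = new ProducerRecord[String, String]("order-events", "order-123", """{"status":"CREATED"}""")
      producer.send(record).get() // block for the acknowledgement in this sketch
    } finally {
      producer.close()
    }
  }
}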

Posted 2 days ago

Apply

14.0 - 18.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role : The role involves creating innovative solutions, guiding development teams, ensuring technical excellence, and driving architectural decisions aligned with company policies. The Solution Designer/Tech Lead will be a key technical advisor, collaborating with onshore teams and leadership to deliver high-impact Data and AI/ML projects. Responsibilities : Design and architect Generative AI solutions leveraging AWS services such as Bedrock, S3, PG Vector, Kendra, and SageMaker. Collaborate closely with developers to implement solutions, providing technical guidance and support throughout the development lifecycle. Lead the resolution of complex technical issues and challenges in AI/ML projects. Conduct thorough solution reviews and ensure adherence to best practices and company standards. Navigate governance processes and obtain necessary approvals for initiatives. Make critical architectural and design decisions aligned with organizational policies and industry best practices. Liaise with onshore technical teams, presenting solutions and providing expert analysis on proposed approaches. Conduct technical sessions and knowledge-sharing workshops on AI/ML technologies and AWS services. Evaluate and integrate emerging technologies and frameworks like LangChain into solution designs. Develop and maintain technical documentation, including architecture diagrams and design specifications. Mentor junior team members and foster a culture of innovation and continuous learning. Collaborate with data scientists and analysts to ensure optimal use of data in AI/ML solutions. Coordinate with clients, data users, and key stakeholders to achieve long-term objectives for data architecture. Stay updated on the latest trends and advancements in AI/ML, cloud and data technologies. Key Qualifications and Experience: Extensive experience (14-18 years) in software development and architecture, with a focus on AI/ML solutions. Deep understanding of AWS services, particularly those related to AI/ML (Bedrock, SageMaker, Kendra, etc.). Proven track record in designing and implementing data, analytics, reporting and/or AI/ML solutions. Strong knowledge of data structures, algorithms, and software design patterns. Expertise in data management, analytics, and reporting tools. Proficiency in at least one programming language commonly used in AI/ML (e.g., Python, Java, Scala). Familiarity with DevOps practices and CI/CD pipelines. Understanding of AI ethics, bias mitigation, and responsible AI principles. Basic understanding of data pipelines and ETL processes, with the ability to design and implement efficient data flows for AI/ML models. Experience in working with diverse data types (structured, unstructured, and semi-structured) and the ability to preprocess and transform data for use in generative AI applications.

Posted 2 days ago

Apply

9.0 years

0 Lacs

India

On-site

What You'll Do Avalara is an AI-first company. We expect every engineer, manager, and leader to actively leverage AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows, decision-making, and products — and success at Avalara requires embracing AI as an essential capability, not an optional tool. We are seeking an experienced AI & Machine Learning Technical Manager to lead our dynamic team in developing cutting-edge AI & ML solutions. This role is perfect for someone passionate about applying AI and ML, promoting innovation, and creating impactful products. You will be responsible for our AI systems (conversational agents, tax code classification for products and services, document intelligence, etc.) and how we apply them at Avalara to simplify and scale tax compliance across our entire portfolio of products. As an important part of our leadership team, you will shape the future of our AI & ML projects, manage a talented team of AI professionals, and collaborate with teams to implement projects. We offer the chance to work on pioneering AI technologies, mentor a team of experts, and contribute to the strategic direction of our AI & ML endeavors. This role will report to the Sr. Director of AI & ML. What Your Responsibilities Will Be Lead and manage a team of AI&ML engineers and data scientists, overseeing project lifecycles from conception to deployment, ensuring timely delivery. Develop team members, providing guidance on technical challenges, career development, and professional growth opportunities. Stay up to date with the latest AI&ML technologies and methodologies, incorporating new approaches into our projects to maintain competitive advantage. Develop our AI&ML strategy, aligning with our objectives, and ensuring the team's projects support this vision. Collaborate with teams, including product management, engineering, and design, to define project requirements, set priorities, and allocate resources effectively. Foster a culture of innovation, encouraging experimentation and learning, and leading by example in adopting a hands-on approach to problem-solving. Ensure implementation of best practices in project management, software development, and quality assurance to optimize team performance and productivity. Manage stakeholder communications, providing regular updates on project status, important milestones, and any challenges or risks, ensuring alignment and support across the organization. What You’ll Need To Be Successful Specific Qualifications Expertise in AI technologies and methodologies, with a portfolio of projects demonstrating your ability to apply these in solving complex problems. Experience building and deploying production APIs powered by AI & Machine Learning systems. Proficiency in programming languages relevant to AI & ML, such as Python, R, Java, Scala, C++, and familiarity with AI & ML frameworks and libraries (e.g., PyTorch, TensorFlow, and Scikit-learn). Experience with cloud computing platforms (AWS, Azure, Google Cloud) and understanding of how to use these for scalable, secure, reliable distributed systems with complex workflows relying on AI & ML solutions. Background in data engineering and familiarity with database technologies, as well as data processing/ETL pipelines and visualization tools. General Qualifications Bachelor's degree in Computer Science, Artificial Intelligence, or Machine Learning.
9 years of experience in AI & ML with at least 6 years in a management position, overseeing technical teams. Ability to translate complex technical concepts and challenges into clear strategic plans and applicable solutions. People management skills, with experience mentoring and developing teams. Excellent project management skills, with experience in agile methodologies. How We’ll Take Care Of You Total Rewards In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship. What You Need To Know About Avalara We’re defining the relationship between tax and tech. We’ve already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion-dollar business - and we’re not slowing down until we’ve achieved our mission - to be part of every transaction in the world. We’re bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we’ve designed that empowers our people to win. We’ve been different from day one. Join us, and your career will be too. We’re An Equal Opportunity Employer Supporting diversity and inclusion is a cornerstone of our company — we don’t want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.

Posted 2 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

7+ years of experience in data engineering or an equivalent technical role. 5+ years of hands-on experience with AWS Cloud Development and DevOps. Strong expertise in SQL, data modeling, and ETL/ELT pipelines. Deep experience with Oracle (PL/SQL, performance tuning, data extraction). Proficiency in Python and/or Scala for data processing tasks. Strong knowledge of cloud infrastructure (networking, security, cost optimization). Experience with infrastructure as code (Terraform). Familiarity with CI/CD pipelines and DevOps tooling (e.g., Jenkins, GitHub Actions).
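A minimal sketch, assuming a plain JDBC connection from Scala, of the kind of Oracle extraction step this stack implies; the connection URL, schema, and query are hypothetical, and credentials are read from the environment only for brevity.

import java.sql.DriverManager
import scala.util.Using

object OracleExtract {
  def main(args: Array[String]): Unit = {
    // Connection details are placeholders; credentials would come from a secrets store in practice
    val url = "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"

    Using.resource(DriverManager.getConnection(url, sys.env("DB_USER"), sys.env("DB_PASSWORD"))) { conn =>
      // Push the aggregation down to the database (ELT style), then read the small result set
      val sql =
        """SELECT region, COUNT(*) AS order_count
          |FROM sales.orders
          |WHERE order_date >= SYSDATE - 1
          |GROUP BY region""".stripMargin

      Using.resource(conn.createStatement()) { stmt =>
        Using.resource(stmt.executeQuery(sql)) { rs =>
          while (rs.next()) {
            println(s"${rs.getString("region")}\t${rs.getLong("order_count")}")
          }
        }
      }
    }
  }
}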

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows. Who You Are Have working knowledge of one OO language, preferably C#, but we won’t hold your Java expertise against you (you’re the type of person who’s interested in learning and becoming an expert at new things). Additionally, we’ve been using Python more and more, and bonus points if you’re familiar with Scala. Have experience with architecturally complex distributed systems. Highly focused on operational excellence and quality – you have a passion for writing clean and well-tested code and believe in the testing pyramid. Outstanding verbal and written communication skills with the ability to work with others at all levels, effective at working with geographically remote and culturally diverse teams. You enjoy solving challenging problems, all while having a blast with equally passionate team members. Conversant in AI engineering. You’ve been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience. Building large-scale platform infrastructure and REST APIs serving machine-learning-driven content recommendations to Seismic products. Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models. Helping build next-gen agentic tooling for reporting and insights. Processing large amounts of internal and external system data for analytics, caching, modeling and more. Identifying performance bottlenecks and implementing solutions for them.
Participating in code reviews, system design reviews, agile ceremonies, bug triage and on-call rotations. BS or MS in Computer Science, similar technical field of study, or equivalent practical experience. 3+ years of software development experience within a SaaS business. Must have familiarity with .NET Core, C#, and related frameworks. Experience in data engineering - building and managing data pipelines, ETL processes, and familiarity with various technologies that drive them: Kafka, FiveTran (optional), Spark/Scala (optional), etc. Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.). Familiarity with RESTful microservice-based APIs. Experience with modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) is a big plus. Experience with the Scrum and Agile development processes. Familiarity developing in cloud-based environments. Optional: Experience with 3rd party integrations. Optional: familiarity with meeting systems like Zoom, WebEx, MS Teams. Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
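For illustration only, a small Scala sketch of the Kafka-to-Spark ingestion pattern described above, using Spark Structured Streaming (the Kafka connector package must be on the classpath); the broker, topic, paths, and checkpoint location are placeholders.

import org.apache.spark.sql.SparkSession

object KafkaIngestStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

    // Read the raw event stream from Kafka (broker and topic are placeholders)
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "content-engagement")
      .option("startingOffsets", "latest")
      .load()

    // Kafka delivers bytes; cast the payload to strings for downstream parsing
    val events = raw.selectExpr(
      "CAST(key AS STRING) AS key",
      "CAST(value AS STRING) AS payload",
      "timestamp"
    )

    // Land the stream in a raw zone; reporting models would be built further downstream
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-lake/bronze/engagement")
      .option("checkpointLocation", "s3a://example-lake/checkpoints/engagement")
      .start()

    query.awaitTermination()
  }
}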

Posted 2 days ago

Apply

7.0 years

25 - 35 Lacs

Chennai, Tamil Nadu, India

On-site

Company: Accelon Website: Visit Website Business Type: Small/Medium Business Company Type: Product & Service Business Model: B2B Funding Stage: Pre-seed Industry: FinTech Salary Range: ₹ 25-35 Lacs PA Job Description This is a permanent role with a product-based global fintech company - a valued client of Accelon Inc. Required Skills Java, Spring Boot and REST Oracle DB Good knowledge of data structures and algorithm concepts At least 7 years of experience in software product development. Bachelor's/Master's degree in Computer Science, Engineering, or a closely related quantitative discipline. Expertise in online payments and related domains is a plus. Requirements Strong skills in Java, Scala, Spark & Raptor and OO-based design and development. Strong skills in Spring Boot, Hibernate, REST, Maven, GitHub, and other open-source Java libraries. Excellent problem-solving abilities and strong understanding of the software development/delivery lifecycle. Proven track record working with real-world projects and delivering complex software projects from concept to production, with a focus on scalability, reliability, and performance. Good knowledge of data structures and algorithm concepts, as well as database design, tuning and query optimization. Strong debugging and problem-resolution skills, with a focus on automation and test-driven development. Ability to work in a fast-paced, iterative development environment. Hands-on development experience using Java, Spring Core and Spring Batch. Deep understanding of and extensive experience applying advanced object-oriented design and development principles. Experience developing data-driven applications using an industry-standard RDBMS (Oracle, etc.), including strong data architecture and SQL development skills. Data modelling skills with relational databases, Elasticsearch (Kibana), and Hadoop. Experience with REST APIs, web services, JMS, unit testing and build tools. Responsibilities The team member will be expected to adhere to the SDLC process and interact with the team on a daily basis. Develops efficient, elegant, clean, reusable code with no unnecessary complication or abstraction. Manages workload and other assignments efficiently while being able to resolve time-critical situations reliably and professionally. Work with various PD teams on integration and post-integration (live) issues. Engage in the automation of daily activities that drive operational excellence and ensure highly productive operating procedures. Weekend and after-hours support is required for BCDC products and applications on the live site, on a rotating schedule.
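Purely as an illustration of the data-structure and algorithm fundamentals this posting emphasizes, here is a small LRU cache sketch in idiomatic Scala; it is a teaching example, not part of any client codebase.

import scala.collection.mutable

// A small fixed-capacity LRU cache: least-recently-used entries are evicted first.
final class LruCache[K, V](capacity: Int) {
  require(capacity > 0, "capacity must be positive")

  // LinkedHashMap preserves insertion order; re-inserting on access marks recency.
  private val entries = mutable.LinkedHashMap.empty[K, V]

  def get(key: K): Option[V] =
    entries.remove(key).map { value =>
      entries.put(key, value) // move to most-recently-used position
      value
    }

  def put(key: K, value: V): Unit = {
    entries.remove(key)
    if (entries.size >= capacity) {
      entries.remove(entries.head._1) // evict the least-recently-used entry
    }
    entries.put(key, value)
  }
}

object LruCacheDemo extends App {
  val cache = new LruCache[String, Int](2)
  cache.put("a", 1); cache.put("b", 2)
  cache.get("a")          // "a" becomes most recently used
  cache.put("c", 3)       // evicts "b"
  println(cache.get("b")) // None
  println(cache.get("a")) // Some(1)
}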

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Company Description Nielsen Global Media uses cutting-edge technology and industry-leading data science to tackle some of the hardest problems in marketing science. We’re automating our models with artificial intelligence and machine learning to produce the same quality insights as a traditional white-glove consulting engagement at unparalleled speed and scale. Job Description About the Role:- The team this role supports is responsible for the critical function of managing lineups and metadata across various media channels such as cable, broadcast and video on demand, covering a wide scope of data from both local and national providers. This role requires flexibility to provide technical support across different time zones, including both IST and US business hours on a rotational basis. The Support Engineer will serve as the primary point of contact for customer and stakeholder inquiries, responsible for troubleshooting issues, following Standard Operating Procedures (SOPs) and escalating to the development team when necessary. This role requires close collaboration with cross-functional teams to ensure timely and effective issue resolution, driving operational stability and enhancing customer satisfaction. In this role, you will debug and attempt to resolve issues independently using SOPs. If unable to resolve an issue, you will escalate it to the next level of support, involving the development team as needed. Your goal will be to ensure efficient handling of support requests and to continuously improve SOPs for recurring issues. Responsibilities:- Serve as the first point of contact for customer or stakeholder issues, providing prompt support during the US/IST time zone on a rotational basis. Execute SOPs to troubleshoot and resolve recurring issues, ensuring adherence to documented procedures. Provide technical support and troubleshooting for cloud-based infrastructure and services, including compute, storage, networking and security components. Collaborate with application, security and other internal teams to resolve complex issues related to cloud-based services and infrastructure. Escalate unresolved issues to the development team and provide clear documentation of troubleshooting steps taken. Document and maintain up-to-date SOPs, troubleshooting guides, and technical support documentation. Collaborate with cross-functional teams to ensure issues are tracked, escalated, and resolved efficiently. Proactively identify and suggest process improvements to enhance support quality and response times. Qualifications Key Skills: Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Experience Range: 4 to 6 years. Must-have skills: Proficiency in the Java programming language. Excellent SQL skills for querying and analyzing data from various database systems. Good understanding of database concepts and technologies. Good problem-solving skills and ability to work independently. Good proficiency in the AWS cloud platform and its core services. Good written and verbal communication skills with a strong emphasis on technical documentation. Ability to follow and create detailed SOPs for various support tasks. Good-to-have skills: Knowledge of Scala/Python for scripting and automation. Familiarity with big data technologies such as Spark and Hive. Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money.
Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Department: Information Technology Location: APAC-India-IT Delivery Center Hyderabad Description Essential Duties and Responsibilities: Develop and maintain data pipelines using Azure native services like ADLS Gen 2, Azure Data Factory, Synapse, Spark, Python, Databricks and AWS cloud services, Databurst. Develop datasets required for business analytics in Power BI and Azure Data Warehouse. Ensure software development principles, standards, and best practices are followed. Maintain existing applications and provide operational support. Review and analyze user requirements and write system specifications. Ensure quality design, delivery, and adherence to corporate standards. Participate in daily stand-ups, reviews, design sessions and architectural discussions. Other duties may be assigned. What We're Looking For Required Qualifications and Skills: 5+ years of experience in solution delivery for data analytics to provide insights for various departments in the organization. 5+ years of experience delivering solutions using the Microsoft Azure platform or AWS services, with emphasis on data solutions and services. Extensive knowledge of writing SQL queries and experience in performance-tuning queries. Experience developing software architectures and key software components. Proficient in one or more of the following programming languages: C#, Java, Python, Scala, and related open-source frameworks. Understanding of data services including Azure SQL Database, Data Lake, Databricks, Data Factory, Synapse. Data modeling experience on Azure DW/AWS, with an understanding of dimensional models, star schemas and data vaults. Quick learner who is passionate about new technologies. Strong sense of ownership, customer obsession, and drive with a can-do attitude. Team player with great communication skills--listening, speaking, reading, and writing--in English. BS in Computer Science, Computer Engineering, or other quantitative fields such as Statistics, Mathematics, Physics, or Engineering. Applicant Privacy Policy Review our Applicant Privacy Policy for additional information. Equal Opportunity Statement Align Technology is an equal opportunity employer. We are committed to providing equal employment opportunities in all our practices, without regard to race, color, religion, sex, national origin, ancestry, marital status, protected veteran status, age, disability, sexual orientation, gender identity or expression, or any other legally protected category. Applicants must be legally authorized to work in the country for which they are applying, and employment eligibility will be verified as a condition of hire.
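A minimal, hypothetical Scala sketch of the kind of Databricks/Spark step described here: reading raw files from ADLS Gen2 and writing a curated Delta table for downstream Power BI models. The storage account, container, paths, and columns are placeholders, and Delta Lake support is assumed to be available on the cluster.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object CurateCases {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession already exists; getOrCreate reuses it
    val spark = SparkSession.builder().appName("curate-cases").getOrCreate()

    // Raw zone in ADLS Gen2 (account, container and path are placeholders)
    val raw = spark.read
      .option("header", "true")
      .csv("abfss://raw@examplestorage.dfs.core.windows.net/cases/")

    // Light standardisation before exposing the dataset to reporting models
    val curated = raw
      .dropDuplicates("case_id")
      .withColumn("load_date", current_date())

    // Write a Delta table in the curated zone (Delta libraries assumed present)
    curated.write
      .format("delta")
      .mode("overwrite")
      .save("abfss://curated@examplestorage.dfs.core.windows.net/cases/")
  }
}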

Posted 2 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities JOB DESCRIPTION Develop quality-focused test strategies that help the team deliver software that provides maximum quality without sacrificing business value. Test early and often. Work with multiple teams of engineers and product managers to gather requirements and data sources to design and build test plans including data quality validations. Participate in projects developed with agile methodology. Design and document test cases and test data to ensure proper coverage. Perform exploratory/manual tests as needed. Collaborate with software engineers to triage issues and work to ensure the validity, timeliness, consistency, completeness and accuracy of our data across all data platform components. Write, execute, and monitor automated test suites for integration and regression testing. Integrate tests as part of continuous delivery pipelines. Define quality metrics and build quality monitoring solutions and dashboards. Nurture a culture of quality through collaboration with teammates across the engineering function to make sure quality is embedded in both processes and technology. Mentor/coach team members to ensure appropriate testing coverage within the team with a focus on continuous testing and a shift-left approach. A Suitable Candidate Would Have Minimum 3 years of testing experience working with applications developed in languages like Node.js, Python, Golang, and Java. Solid experience in writing clear, concise, and comprehensive test plans and test cases. Experience in building automated test suites for REST and/or gRPC APIs with a focus on data validation. Experience in a programming language like Python, Java, or Golang, using it for API test automation and web UI testing. Experience with UI test frameworks like Selenium, WebdriverIO, Cucumber, pytest. Experience with test case management tools like TestRail, and API testing tools like Postman. Knowledge of data quality tools like Great Expectations, Deequ, etc. is desirable. Must understand databases and ORMs, with experience in at least one RDBMS and DB query language. Highly experienced in writing queries for data validation across different data sources and during the processing pipeline. Experience with modern Quality Engineering principles such as Continuous Testing and Shift Left. Good understanding of service-oriented and microservices architecture. Experience with cloud environments like AWS, GCP, source control tools like GitHub, and continuous integration and delivery software. Attitude to work in a fast-paced environment which values agility over talk. Great communication skills. Experience testing fulfillment systems is a plus. Skill Set: Scala, Go, Python, Java, TestRail, pytest, Postman, test automation and continuous delivery frameworks, Great Expectations framework, AWS, GitHub, GitLab, Selenium, Node.js, React.
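As a hedged illustration of the data-validation style of automated testing this role calls for, here is a small ScalaTest suite (Scala and Spark both appear in the skill set) that asserts basic quality rules on an in-memory dataset; the dataset, columns, and rules are invented for the example.

import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

class OrdersDataQualitySpec extends AnyFunSuite {

  private val spark = SparkSession.builder()
    .appName("orders-dq-tests")
    .master("local[*]") // local mode for the test run
    .getOrCreate()

  import spark.implicits._

  // A tiny in-memory stand-in for the pipeline output under test
  private val orders = Seq(
    ("o-1", "IN", 120.0),
    ("o-2", "US", 80.5)
  ).toDF("order_id", "country", "amount")

  test("order_id is never null and always unique") {
    assert(orders.filter($"order_id".isNull).count() == 0)
    assert(orders.select("order_id").distinct().count() == orders.count())
  }

  test("amounts are non-negative") {
    assert(orders.filter($"amount" < 0).count() == 0)
  }
}

In a continuous delivery pipeline this suite would run against the real pipeline output and feed the quality metrics and dashboards the posting describes.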

Posted 2 days ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Risk Management Level Associate Job Description & Summary A career within Internal Audit services will provide you with an opportunity to gain an understanding of an organisation’s objectives, regulatory and risk management environment, and the diverse needs of their critical stakeholders. We focus on helping organisations look deeper and see further, considering areas like culture and behaviours to help improve and embed controls. In short, we seek to address the right risks and ultimately add value to their organisation. At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Job Description & Summary: A career within…. Responsibilities: Architecture Design: · Design and implement scalable, secure, and high-performance architectures for Generative AI applications. · Integrate Generative AI models into existing platforms, ensuring compatibility and performance optimization. Model Development and Deployment: · Fine-tune pre-trained generative models for domain-specific use cases. · Data collection, sanitization and data preparation strategy for model fine-tuning. · Well versed with machine learning algorithms like supervised, unsupervised and reinforcement learning, and deep learning. · Well versed with ML models like linear regression, decision trees, gradient boosting, random forest, K-means, etc. · Evaluate, select, and deploy appropriate Generative AI frameworks (e.g., PyTorch, TensorFlow, Crew AI, Autogen, LangGraph, agentic code, agent flow). Innovation and Strategy: · Stay up to date with the latest advancements in Generative AI and recommend innovative applications to solve complex business problems. · Define and execute the AI strategy roadmap, identifying key opportunities for AI transformation. · Good exposure to Agentic Design patterns Collaboration and Leadership: · Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders. · Mentor and guide team members on AI/ML best practices and architectural decisions. · Should be able to lead a team of data scientists, GenAI engineers and Software Developers. Performance Optimization: · Monitor the performance of deployed AI models and systems, ensuring robustness and accuracy. · Optimize computational costs and infrastructure utilization for large-scale deployments.
Ethical and Responsible AI: · Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks. · Implement safeguards to mitigate bias, misuse, and unintended consequences of Generative AI. Mandatory skill sets: · Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark. · Experience with machine learning and artificial intelligence frameworks, models and libraries (TensorFlow, PyTorch, Scikit-learn, etc.). · Should have strong knowledge of foundational LLMs (OpenAI GPT-4o, o1, Claude, Gemini, etc.), as well as strong knowledge of open-source models like Llama 3.2, Phi, etc. · Proven track record with event-driven architectures and real-time data processing systems. · Familiarity with Azure DevOps and other LLMOps tools for operationalizing AI workflows. · Deep experience with Azure OpenAI Service and vector DBs, including API integrations, prompt engineering, and model fine-tuning, or equivalent tech in AWS/GCP. · Knowledge of containerization technologies such as Kubernetes and Docker. · Comprehensive understanding of data lakes and strategies for data management. · Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel. · Proficiency in cloud computing platforms such as Azure or AWS. · Exceptional leadership, problem-solving, and analytical abilities. · Superior communication and collaboration skills, with experience managing high-performing teams. · Ability to operate effectively in a dynamic, fast-paced environment. Preferred skill sets: · Experience with additional technologies such as Datadog and Splunk. · Programming languages like C#, R, Scala. · Possession of relevant solution architecture certificates and continuous professional development in data engineering and Gen AI. Years of experience required: 0-1 Years Education qualification: · BE / B.Tech / MCA / M.Sc / M.E / M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor in Business Administration, Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Java Optional Skills Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
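Scala is only listed among the preferred languages here, but as an illustrative sketch of the data collection and sanitization step described under model fine-tuning, the following Spark job de-duplicates records and masks an obvious piece of PII before the corpus is persisted; the paths, column names, and masking rule are hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FineTuneDataPrep {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("fine-tune-data-prep").getOrCreate()

    // Raw records to be turned into fine-tuning examples (path is a placeholder)
    val raw = spark.read.json("s3a://example-bucket/raw/conversations/")

    val sanitised = raw
      .dropDuplicates("conversation_id")
      .filter(length(col("text")) > 20) // drop near-empty records
      // Crude PII masking for illustration only: redact e-mail addresses
      .withColumn("text", regexp_replace(col("text"), "[\\w.+-]+@[\\w.-]+", "<EMAIL>"))

    // Persist the cleaned corpus for the fine-tuning job
    sanitised.write.mode("overwrite").json("s3a://example-bucket/curated/fine_tune_corpus/")
    spark.stop()
  }
}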

Posted 2 days ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Risk Management Level Associate Job Description & Summary A career within Internal Audit services will provide you with an opportunity to gain an understanding of an organisation’s objectives, regulatory and risk management environment, and the diverse needs of their critical stakeholders. We focus on helping organisations look deeper and see further, considering areas like culture and behaviours to help improve and embed controls. In short, we seek to address the right risks and ultimately add value to their organisation. *Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Job Description & Summary: A career within…. Responsibilities: Architecture Design: · Design and implement scalable, secure, and high-performance architectures for Generative AI applications. · Integrate Generative AI models into existing platforms, ensuring compatibility and performance optimization. Model Development and Deployment: · Fine-tune pre-trained generative models for domain-specific use cases. · Data collection, sanitization and data preparation strategy for model fine-tuning. · Well versed with machine learning algorithms like supervised, unsupervised and reinforcement learning, and deep learning. · Well versed with ML models like linear regression, decision trees, gradient boosting, random forest, K-means, etc. · Evaluate, select, and deploy appropriate Generative AI frameworks (e.g., PyTorch, TensorFlow, Crew AI, Autogen, LangGraph, agentic code, agent flow). Innovation and Strategy: · Stay up to date with the latest advancements in Generative AI and recommend innovative applications to solve complex business problems. · Define and execute the AI strategy roadmap, identifying key opportunities for AI transformation. · Good exposure to Agentic Design patterns Collaboration and Leadership: · Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders. · Mentor and guide team members on AI/ML best practices and architectural decisions. · Should be able to lead a team of data scientists, GenAI engineers and Software Developers. Performance Optimization: · Monitor the performance of deployed AI models and systems, ensuring robustness and accuracy. · Optimize computational costs and infrastructure utilization for large-scale deployments.
Ethical and Responsible AI: · Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks. · Implement safeguards to mitigate bias, misuse, and unintended consequences of Generative AI. Mandatory skill sets: · Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark. · Experience with machine learning and artificial intelligence frameworks, models and libraries (TensorFlow, PyTorch, Scikit-learn, etc.). · Should have strong knowledge of foundational LLMs (OpenAI GPT-4o, o1, Claude, Gemini, etc.), as well as strong knowledge of open-source models like Llama 3.2, Phi, etc. · Proven track record with event-driven architectures and real-time data processing systems. · Familiarity with Azure DevOps and other LLMOps tools for operationalizing AI workflows. · Deep experience with Azure OpenAI Service and vector DBs, including API integrations, prompt engineering, and model fine-tuning, or equivalent tech in AWS/GCP. · Knowledge of containerization technologies such as Kubernetes and Docker. · Comprehensive understanding of data lakes and strategies for data management. · Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel. · Proficiency in cloud computing platforms such as Azure or AWS. · Exceptional leadership, problem-solving, and analytical abilities. · Superior communication and collaboration skills, with experience managing high-performing teams. · Ability to operate effectively in a dynamic, fast-paced environment. Preferred skill sets: · Experience with additional technologies such as Datadog and Splunk. · Programming languages like C#, R, Scala. · Possession of relevant solution architecture certificates and continuous professional development in data engineering and Gen AI. Years of experience required: 0-1 Years Education qualification: · BE / B.Tech / MCA / M.Sc / M.E / M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor in Business Administration, Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Java Optional Skills Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 2 days ago

Apply

7.0 years

5 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Design, develop, and maintain scalable, robust, and secure backend services using Scala and Java Architect and implement microservices using the Play Framework Deploy and manage applications in Kubernetes on AWS Integrate backend services with PostgreSQL databases and data processing systems Utilize Datadog for monitoring, logging, and performance optimization Work with AWS services, including Elastic Beanstalk, for deployment and management of applications Use GitHub for version control and collaboration Lead and participate in the complete software development life cycle (SDLC), including planning, development, testing, and deployment Troubleshoot, debug, and upgrade existing software Document the backend process to aid in future upgrades and maintenance Perform code reviews and mentor junior developers Collaborate with cross-functional teams to define, design, and ship new features Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field 7+ years of experience in software development using Scala and Java Technical Skills: Experience with Datadog for monitoring, logging, and performance optimization Experience with PostgreSQL databases Experience with Agile methodologies (Scrum, Test Driven Development, Continuous Integration) Solid proficiency in Scala and Java programming languages Extensive experience with the Play Framework for building microservices Proficiency in deploying and managing applications in Kubernetes on AWS Proficiency in AWS services, including Elastic Beanstalk Familiarity with version control systems, particularly GitHub Solid understanding of data structures and algorithms Soft Skills: Excellent problem-solving and analytical skills Solid communication and collaboration skills Ability to work independently and as part of a team Leadership skills and experience mentoring junior developers At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
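For illustration, a minimal sketch of a Play Framework controller of the kind this Scala microservices role involves; the route configuration, JSON model, and PostgreSQL repository wiring are omitted, and the controller and endpoint names are hypothetical.

import javax.inject.{Inject, Singleton}
import play.api.libs.json.Json
import play.api.mvc.{AbstractController, ControllerComponents}

import scala.concurrent.{ExecutionContext, Future}

@Singleton
class MemberController @Inject()(cc: ControllerComponents)(implicit ec: ExecutionContext)
    extends AbstractController(cc) {

  // Health probe suitable for Kubernetes liveness/readiness checks
  def health = Action {
    Ok(Json.obj("status" -> "up"))
  }

  // Asynchronous lookup; a real service would query PostgreSQL instead of this stub
  def member(id: Long) = Action.async {
    Future.successful(Ok(Json.obj("id" -> id, "plan" -> "standard")))
  }
}

The corresponding entries in the routes file and the repository layer are left out to keep the sketch short.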

Posted 2 days ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies