5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don't pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry.

As a Senior Lead Data Architect at JPMorgan Chase within Consumer and Community Banking Data Technology, you are an integral part of a team that develops high-quality data architecture solutions for various software applications, platforms, and data products. Drive significant business impact and help shape the global target-state architecture through your capabilities in multiple data architecture domains.

Job responsibilities:
- Represents the data architecture team at technical governance bodies and provides feedback on proposed improvements to data architecture governance practices.
- Evaluates new and current technologies using existing data architecture standards and frameworks.
- Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors.
- Designs secure, high-quality, scalable solutions and reviews architecture solutions designed by others.
- Drives data architecture decisions that impact data product and platform design, application functionality, and technical operations and processes.
- Serves as a function-wide subject matter expert in one or more areas of focus.
- Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle.
- Influences peers and project decision-makers to consider the use and application of leading-edge technologies.
- Advises junior architects and technologists.

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Advanced knowledge of architecture, applications, and technical processes, with considerable in-depth knowledge of the data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain-driven design).
- Practical cloud-based data architecture and deployment experience, preferably on AWS.
- Practical SQL development experience in cloud-native relational databases, e.g., Snowflake, Athena, PostgreSQL.
- Ability to deliver various types of data models with multiple deployment targets, e.g., conceptual, logical, and physical data models deployed as operational vs. analytical data stores.
- Advanced skills in one or more data engineering disciplines, e.g., streaming, ELT, event processing.
- Ability to tackle design and functionality problems independently with little to no oversight.
- Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future-state data architecture.

Preferred qualifications, capabilities, and skills:
- Financial services experience; card and banking a big plus.
- Practical experience with modern data processing technologies, e.g., Kafka streaming, dbt, Spark, Airflow.
- Practical experience with data mesh and/or data lakes.
- Practical experience in machine learning/AI, with Python development a big plus.
- Practical experience with graph and semantic technologies, e.g., RDF, LPG, Neo4j, Gremlin.
- Knowledge of architecture assessment frameworks, e.g., Architecture Tradeoff Analysis Method.
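The streaming and event-processing skills the posting lists can be illustrated with a minimal sketch. Below is a toy tumbling-window aggregation in pure Python (the event data and function name are hypothetical, not from the posting; a production system would use Kafka Streams, Spark, or similar):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    windows and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        bucket = ts - (ts % window_seconds)  # start of the window the event falls in
        windows[bucket][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical clickstream: (seconds-since-start, event type)
events = [(0, "click"), (3, "view"), (7, "click"), (12, "click")]
print(tumbling_window_counts(events, 10))
# windows [0,10) and [10,20)
```

The same bucketing idea underlies windowed aggregations in real stream processors; the difference there is that events arrive continuously and windows are emitted incrementally.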
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities:
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems.
- Produces architecture and design artifacts for complex applications, ensuring design constraints are met by software code development.
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
- Contributes to software engineering communities of practice and events that explore new and emerging technologies.
- Adds to the team culture of diversity, opportunity, inclusion, and respect.

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 3+ years of applied experience.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Proficiency in Java/J2EE, REST APIs, Python, and web services, with experience building event-driven microservices and Kafka streaming.
- Experience with RDBMS and NoSQL databases.
- Working proficiency in developer toolsets such as Git/Bitbucket, Jira, and Maven.
- Experience with AWS services and with Spring Framework services on public cloud infrastructure.
- Proficiency in automation and continuous delivery methods, and in all aspects of the Software Development Life Cycle.
- Demonstrated knowledge of software applications and technical processes within a technical discipline.
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security.
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- In-depth knowledge of the financial services industry and its IT systems.

Preferred qualifications, capabilities, and skills:
- AWS certification.
- Cloud engineering experience, including Pivotal Cloud Foundry and AWS.
- Experience in performance testing and tuning, as well as shift-left practices.
- Domain-driven design (DDD).
- Experience with MongoDB.
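As a rough illustration of the event-driven style this role calls for, here is a toy in-process publish/subscribe bus in Python. This is only a sketch of the pattern: real event-driven microservices would communicate through a broker such as Kafka, and every name here (topics, payload fields) is hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Toy in-process publish/subscribe bus illustrating the
    decoupling at the heart of event-driven microservices:
    publishers never reference their consumers directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("orders.created", received.append)
bus.publish("orders.created", {"order_id": 42, "amount": 99.5})
print(received)  # [{'order_id': 42, 'amount': 99.5}]
```

With a broker like Kafka, the `subscribe`/`publish` pair becomes consumer groups and producers, and topics gain durability and partitioned ordering, but the decoupled shape is the same.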
Posted 1 week ago
8.0 - 13.0 years
20 - 30 Lacs
Gurugram
Remote
Remote Work
Shift time: 3:30 PM - 12:30 PM IST
Technology: Java 8/15/17, AWS, Kafka Streaming, Gradle/Maven, Docker

Job description:
- Build and maintain back-end services and APIs using Java 8 and above
- Core Java 8, 15, 17 or above
- Analyzing Java memory and performance issues
- Knowledge of Java garbage collection
- Excellent knowledge of Kafka streaming
- Knowledge of AWS: ECS, RDS, S3
- Knowledge of Gradle/Maven
- Knowledge of Docker
- Spring Framework and microservices (Spring Boot)
- Basic database skills: able to write DDL and DML queries and analyze database performance issues
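The basic DDL/DML skills the posting asks for can be sketched with Python's built-in `sqlite3` module (the role itself is Java-based; this is just an illustrative toy schema, not part of the posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the schema (Data Definition Language)
cur.execute("""
    CREATE TABLE orders (
        id     INTEGER PRIMARY KEY,
        status TEXT NOT NULL,
        amount REAL NOT NULL
    )
""")

# DML: insert and query rows (Data Manipulation Language)
cur.executemany(
    "INSERT INTO orders (status, amount) VALUES (?, ?)",
    [("open", 10.0), ("open", 25.5), ("closed", 7.25)],
)
cur.execute(
    "SELECT status, SUM(amount) FROM orders GROUP BY status ORDER BY status"
)
rows = cur.fetchall()
print(rows)  # [('closed', 7.25), ('open', 35.5)]
conn.close()
```

Analyzing database performance issues typically starts from the same place in any RDBMS: inspecting the query plan (`EXPLAIN QUERY PLAN` in SQLite, `EXPLAIN` in most other engines) and checking index usage.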
Posted 1 month ago
7.0 - 15.0 years
20 - 36 Lacs
Chennai, Tamil Nadu, India
On-site
About This Role
Bounteous x Accolite is a premier end-to-end digital transformation consultancy dedicated to partnering with ambitious brands to create digital solutions for today's complex challenges and tomorrow's opportunities. With uncompromising standards for technical and domain expertise, we deliver innovative and strategic solutions in Strategy, Analytics, Digital Engineering, Cloud, Data & AI, Experience Design, and Marketing. Our Co-Innovation methodology is a unique engagement model designed to align interests and accelerate value creation. Our clients worldwide benefit from the skills and expertise of over 4,000 expert team members across the Americas, APAC, and EMEA. By partnering with leading technology providers, we craft transformative digital experiences that enhance customer engagement and drive business success.

About Bounteous (https://www.bounteous.com/)
Founded in 2003 in Chicago, Bounteous is a leading digital experience consultancy that co-innovates with the world's most ambitious brands to create transformative digital experiences. With services in Strategy, Experience Design, Technology, Analytics and Insight, and Marketing, Bounteous elevates brand experiences through technology partnerships and drives superior client outcomes. For more information, please visit www.bounteous.com

Information Security Responsibilities:
- Promote and enforce awareness of key information security practices, including acceptable use of information assets, malware protection, and password security protocols.
- Identify, assess, and report security risks, focusing on how these risks impact the confidentiality, integrity, and availability of information assets.
- Understand and evaluate how data is stored, processed, or transmitted, ensuring compliance with data privacy and protection standards (GDPR, CCPA, etc.).
- Ensure data protection measures are integrated throughout the information lifecycle to safeguard sensitive information.

Preferred Qualifications:
- 7+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Working knowledge of ETL technology: Talend, Apache NiFi, or AWS Glue.
- Experience with relational SQL and NoSQL databases.
- Experience with big data tools: Hadoop, Spark, Kafka, etc. (nice to have).
- Advanced Alteryx Designer (mandatory).
- Tableau dashboarding.
- AWS (familiarity with Lambda, EC2, AMI).
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (nice to have).
- Experience with cloud services: EMR, RDS, Redshift, or Snowflake.
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (nice to have).
- Experience with object-oriented/functional scripting languages: Python, Java, Scala, etc.

Responsibilities:
- Work with project managers, senior architects, and other team members from Bounteous and client teams to evaluate data systems and project requirements.
- In cooperation with platform developers, develop scalable and fault-tolerant Extract, Transform, Load (ETL) and integration systems for various data platforms that operate at appropriate scale and meet security, logging, fault-tolerance, and alerting requirements.
- Work on data migration projects.
- Effectively communicate the data requirements of various data platforms to team members.
- Evaluate and document existing data ecosystems and platform capabilities.
- Configure CI/CD pipelines.
- Implement the proposed architecture and assist in infrastructure setup.
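As a minimal sketch of the Extract, Transform, Load pattern the responsibilities describe (pure Python with hypothetical sample data; production pipelines would use tools like Talend, NiFi, Glue, or Airflow on real sources):

```python
import csv
import io

# Hypothetical raw export: messy whitespace and inconsistent casing.
RAW = """user_id,amount,currency
1, 10.50 ,usd
2,3.00,EUR
1,2.25,usd
"""

def extract(text):
    """Extract: read raw rows from the source (here, an in-memory CSV)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize whitespace/case and parse numeric fields."""
    return [
        {
            "user_id": int(r["user_id"]),
            "amount": float(r["amount"].strip()),
            "currency": r["currency"].strip().upper(),
        }
        for r in rows
    ]

def load(rows):
    """Load: here, aggregate into a per-user total instead of a warehouse."""
    totals = {}
    for r in rows:
        totals[r["user_id"]] = totals.get(r["user_id"], 0.0) + r["amount"]
    return totals

totals = load(transform(extract(RAW)))
print(totals)  # {1: 12.75, 2: 3.0}
```

In a real pipeline each stage would also carry the logging, retry, and alerting hooks the posting mentions, and a scheduler such as Airflow would run the stages as a DAG.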
Posted 1 month ago
10.0 - 12.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description:
Oracle Cloud Infrastructure (OCI) is a pioneering force in cloud technology, merging the agility of startups with the robustness of an enterprise software leader. Within OCI, the Oracle Generative AI Service team spearheads innovative solutions at the convergence of artificial intelligence and cloud infrastructure. As part of this team, you'll contribute to large-scale cloud solutions utilizing cutting-edge machine learning technologies, aimed at addressing complex global challenges.

We're looking for an experienced Principal Applied Data Scientist to join our OCI Gen-AI Solutions team for strategic customers. In this role, you'll collaborate with applied scientists and product managers to design, develop, and deploy tailored Gen-AI solutions, with an emphasis on Large Language Models (LLMs), Agents, MPC, and Retrieval-Augmented Generation (RAG) with large OpenSearch clusters. You'll lead the development of advanced Gen-AI solutions using the latest ML technologies combined with Oracle's cloud expertise, and your work will significantly impact sectors like financial services, telecom, healthcare, and code generation by creating distributed, scalable, high-performance solutions for strategic customers.

Responsibilities:
- Work directly with key customers and accompany them on their Gen-AI journey: understand their requirements, help them envision, design, and build the right solutions, and work with their ML engineering teams to remove blockers.
- Dive deep into model structure to optimize model performance and scalability.
- Build state-of-the-art solutions with brand-new technologies in this fast-evolving area.
- Configure large-scale OpenSearch clusters and set up ingestion pipelines to get data into OpenSearch.
- Diagnose, troubleshoot, and resolve issues in AI model training and serving.
- Build reusable solution patterns and reference solutions/showcases that apply across multiple customers.
- Be an enthusiastic, self-motivated, and great collaborator.
- Be our product evangelist: engage directly with customers and partners, and participate and present in external events and conferences.
- Perform other duties as assigned.

Qualifications and experience:
- Bachelor's or master's degree in computer science or an equivalent technical field, with 10+ years of experience.
- Able to communicate technical ideas effectively, verbally and in writing (technical proposals, design specs, architecture diagrams, and presentations).
- Demonstrated experience designing and implementing scalable AI models and solutions for production; relevant professional experience as an end-to-end solutions engineer or architect (data engineering, data science, and ML engineering a plus), with evidence of close collaboration with PM and dev teams.
- Experience with OpenSearch, vector databases, PostgreSQL, and Kafka streaming.
- Practical experience setting up and fine-tuning large OpenSearch clusters.
- Experience setting up data ingestion pipelines with OpenSearch.
- Experience with search algorithms, indexing, and optimizing latency and response times.
- Practical experience with the latest technologies in LLMs and generative AI, such as parameter-efficient fine-tuning, instruction fine-tuning, and advanced prompt engineering techniques like Tree-of-Thoughts.
- Familiarity with agents, agent frameworks, and Model Predictive Control (MPC).
- Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Strong publication record, including as a lead author or reviewer, in top-tier journals or conferences.
- Ability and passion to mentor and develop junior machine learning engineers.
- Proficiency in Python and shell scripting tools.

Preferred Qualifications:
- Master's or bachelor's degree in a related field with 5+ years of relevant experience.
- Experience with RAG-based solution architectures.
- Familiarity with OpenSearch and vector stores as a knowledge store.
- Knowledge of LLMs and experience delivering generative AI and agent models a significant plus.
- Familiarity and experience with the latest advancements in computer vision and multimodal modeling a plus.
- Experience with semantic search, multi-modal search, and conversational search.
- Experience working in a public cloud environment, with in-depth knowledge of the IaaS/PaaS industry and competitive capabilities.
- Experience with popular model training and serving frameworks like KServe, Kubeflow, Triton, etc.
- Experience with LLM fine-tuning, especially the latest parameter-efficient fine-tuning and multi-task serving technologies.
- Deep technical understanding of machine learning, deep learning architectures like Transformers, training methods, and optimizers.
- Experience with deep learning frameworks (such as PyTorch, JAX, or TensorFlow).
- Experience diagnosing, fixing, and resolving issues in AI model training and serving.

Career Level: IC4
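As a toy illustration of the retrieval step behind RAG, here is a pure-Python term-frequency/cosine-similarity sketch over hypothetical documents. In a production system this ranking would be done by OpenSearch (BM25 and/or k-NN vector search) over embeddings, and the top results would be fed to an LLM as context:

```python
import math
from collections import Counter

def tf_vector(text):
    """Build a simple term-frequency vector from whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    qv = tf_vector(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, tf_vector(d)),
                    reverse=True)
    return ranked[:k]

# Hypothetical document store
docs = [
    "kafka streaming ingestion pipeline",
    "opensearch cluster indexing and search",
    "fine tuning large language models",
]
print(retrieve("search indexing in opensearch", docs))
# ['opensearch cluster indexing and search']
```

Swapping the term-frequency vectors for dense embeddings and the linear scan for an approximate nearest-neighbor index is what turns this sketch into the vector-store retrieval the posting describes.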
Posted 2 months ago