7.0 years
12 Lacs
India
On-site
Experience- 7+ years
Location- Hyderabad (preferred), Pune, Mumbai

JD- We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities
1. Snowflake Development & Optimization
Design and develop Snowflake databases, schemas, tables, and views following best practices.
Write complex SQL queries, stored procedures, and UDFs for data transformation.
Optimize query performance using clustering, partitioning, and materialized views.
Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development
Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing
Design star schema, snowflake schema, and data vault models in Snowflake.
Implement data sharing, secure views, and dynamic data masking.
Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting
Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation
Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
Document data flows, architecture, and technical specifications.
Mentor junior developers on Snowflake best practices.
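The CDC pattern described under "Data Pipeline Development" — reading inserts, updates, and deletes from a change stream and merging them into a target table — can be sketched in plain Python. This is illustrative only: the record shape and the `apply_changes` helper are hypothetical, and a real pipeline would use Snowflake Streams metadata columns and a SQL MERGE run by a Task.

```python
# Toy sketch of applying CDC change records to a target table, mirroring the
# MERGE logic a Snowflake Task would run over a Stream. Data is made up.

def apply_changes(target, changes):
    """Apply INSERT/UPDATE/DELETE change records, keyed by 'id', to target."""
    for change in changes:
        action, row = change["action"], change["row"]
        if action == "DELETE":
            target.pop(row["id"], None)
        else:  # INSERT and UPDATE both behave as an upsert
            target[row["id"]] = row
    return target

target = {1: {"id": 1, "qty": 5}}
stream = [
    {"action": "INSERT", "row": {"id": 2, "qty": 3}},
    {"action": "UPDATE", "row": {"id": 1, "qty": 7}},
    {"action": "DELETE", "row": {"id": 2}},
]
print(apply_changes(target, stream))  # {1: {'id': 1, 'qty': 7}}
```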
Required Skills & Qualifications
· 8+ years in database development, data warehousing, or ETL.
· 4+ years of hands-on Snowflake development experience.
· Strong SQL or Python skills for data processing.
· Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
· Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
· Certifications: SnowPro Core Certification (preferred).
Preferred Skills
· Familiarity with data governance and metadata management.
· Familiarity with DBT, Airflow, SSIS & IICS.
· Knowledge of CI/CD pipelines (Azure DevOps).
Job Type: Full-time
Pay: From ₹1,200,000.00 per year
Schedule: Monday to Friday
Application Question(s):
How many years of total experience do you currently have?
How many years of experience do you have in Snowflake development?
How many years of experience do you have with DBT?
What is your current CTC?
What is your expected CTC?
What is your notice period/LWD?
What is your current location?
Are you comfortable attending the 1st round face to face on 2nd Aug (Saturday) in the Hyderabad, Mumbai or Pune office?
Posted 4 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Science Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled Machine Learning Engineer with a strong MLOps background to join our team. You will play a pivotal role in building and scaling our machine learning models from development to production. Your expertise in both machine learning and operations will be essential in creating efficient and reliable ML pipelines.

Roles & Responsibilities:
Collaborate with data scientists to develop, train, and evaluate machine learning models.
Build and maintain MLOps pipelines, including data ingestion, feature engineering, model training, deployment, and monitoring.
Leverage cloud platforms (AWS, GCP, Azure) for ML model development, training, and deployment.
Implement DevOps/MLOps best practices to automate ML workflows and improve efficiency.
Develop and implement monitoring systems to track model performance and identify issues.
Conduct A/B testing and experimentation to optimize model performance.
Work closely with data scientists, engineers, and product teams to deliver ML solutions. Stay updated with the latest trends and advancements.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years [Job Code’s Discipline and/or Sub-Discipline]

Functional Skills:
Must-Have Skills:
Solid foundation in machine learning algorithms and techniques
Experience in MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow); experience in DevOps tools (e.g., Docker, Kubernetes, CI/CD)
Proficiency in Python and relevant ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn)
Outstanding analytical and problem-solving skills; ability to learn quickly; good communication and interpersonal skills

Good-to-Have Skills:
Experience with big data technologies (e.g., Spark, Hadoop) and performance tuning in query and data processing
Experience with data engineering and pipeline development
Experience in statistical techniques and hypothesis testing; experience with regression analysis, clustering and classification
Knowledge of NLP techniques for text analysis and sentiment analysis
Experience in analyzing time-series data for forecasting and trend analysis

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
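As a rough illustration of the A/B testing responsibility above, a two-proportion z-test for comparing conversion rates of two model variants can be computed with the standard library alone. The conversion counts are made up for illustration; real experiments would typically use a stats library and pre-registered decision thresholds.

```python
# Two-proportion z-test for an A/B experiment (sketch; data is hypothetical).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for rates B vs A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_b - p_a) / se
    # standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(z, p)
```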
careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 4 days ago
1.0 - 5.0 years
2 - 8 Lacs
Madurai
On-site
Job Description:
Setup, administer and support MySQL database servers for Production, QA and Development.
Monitor databases for performance, bottlenecks and other issues; identify and deploy solutions.
Perform appropriate backup, restoration and upgrades of database servers.
Create / refine complex database queries, indexes, stored procedures & bulk data extraction.
Build database tools and scripts to automate where possible.
Proactively monitor the database systems to ensure secure services with minimum downtime.
Experience in performance tuning and database monitoring tools will be an advantage.
Experience in Replication / Clustering / Tuning / Sizing and Monitoring will be an advantage.
Work in a 24x7 support environment with different shifts. May need to work on weekends and holidays.

Skills:
1-5 years of MySQL/MariaDB experience
Proficient in day-to-day database support and monitoring
Experience with scripting
Excellent oral and written communication skills
Exceptional problem-solving expertise and attention to detail
Candidate must have the ability to complete tasks with very little supervision and a superb ability to work well with others in a team environment
MySQL Certification will be a plus.

Shift: Rotational shifts; week-offs will be on weekdays/night shift
07:30 AM - 04:30 PM
08:30 AM - 05:30 PM
09:00 AM - 06:00 PM
09:30 AM - 06:30 PM
10:30 AM - 07:30 PM
01:30 PM - 10:30 PM
03:30 PM - 12:30 AM
10:30 PM - 07:30 AM

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Schedule: Rotational shift, Weekend availability
Supplemental Pay: Performance bonus, Shift allowance
Experience: MySQL DBA: 3 years (Required)
Location: Madurai, Tamil Nadu (Required)
Work Location: In person
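As a sketch of the backup-automation responsibility above, a small Python helper might compose a `mysqldump` invocation and a dated output name. The helper name, directory, and option set are illustrative choices, not a prescribed setup; `--single-transaction` is the standard flag for a consistent InnoDB snapshot without table locks.

```python
# Compose a mysqldump command and dated backup file name (illustrative sketch).
from datetime import date
import shlex

def dump_command(db, user, backup_dir="/var/backups/mysql"):
    """Return (argv list, output path) for a logical backup of one database."""
    out = f"{backup_dir}/{db}-{date.today():%Y%m%d}.sql.gz"
    cmd = [
        "mysqldump",
        "--single-transaction",   # consistent snapshot for InnoDB, no locks
        "--routines", "--triggers",
        "-u", user, db,
    ]
    return cmd, out

cmd, out = dump_command("appdb", "backup_user")
print(" ".join(shlex.quote(c) for c in cmd), "->", out)
```

In practice the argv would be piped through `gzip` (hence the `.sql.gz` name) and run from cron or a scheduler, with the resulting file rotated and verified by a periodic test restore.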
Posted 4 days ago
0 years
0 Lacs
Bengaluru
Remote
About Us
Always open. Our code, our culture, our opportunities. Leading open innovation without limits. We are SUSE.

SUSE is a global leader in innovative, reliable and secure enterprise open source solutions, including SUSE Linux Enterprise (SLE), Rancher and NeuVector. More than 60% of the Fortune 500 rely on SUSE to power their mission-critical workloads, enabling them to innovate everywhere – from the data center to the cloud, to the edge and beyond. SUSE puts the “open” back in open source, collaborating with partners and communities to give customers the agility to tackle innovation challenges today and the freedom to evolve their strategy and solutions tomorrow. We are open in our roots and open in our approach, striving to be the most trusted open innovator in the World. Openness extends beyond our technology. Our vibrant community thrives on diversity and connectivity without borders.
Linux Support Engineer Job Description

The Role
As a Linux Support Engineer you will:
Provide OS support to users of the SUSE Enterprise Linux product portfolio.
Troubleshoot challenging, complex, critical, and sensitive customer issues related to: installation errors, configuration errors, out-of-the-box functionality that does not work, and booting issues.
Investigate usage problems, unexpected product behaviour, performance degradation, and root cause analysis.
Replicate customer issues in a technical lab environment to provide optimal solutions.
Communicate with customers through email, remote sessions, and occasionally via telephone.
Be part of the APAC team, which makes up a 24x7 (follow-the-sun) support organisation.
Provide technical support and maintain professional communication with SUSE’s customers.
Collaborate efficiently with your technical support colleagues, globally.
Engage with development and product management on bugs and feature requests.
Continuously contribute and collaborate on knowledge resource improvement and creation.
This position is not eligible for remote work and will be based in our Bengaluru office.
Be required to work on call and weekends based on a shift rotation policy.

Preferred technical experience & skills
A solid understanding of and experience with the Linux operating system, preferably certified (SCA, RHCSA, LPIC-1).
Several years of experience in a technical support role or as a system administrator for any Linux OS.
The ability to troubleshoot various aspects of the Linux operating system.
The ability to adapt to new technologies.
Basic Bash scripting.
Expertise or understanding (at minimum) in the below areas:
High Availability / Clustering technologies
Storage technologies (like SAN, multipathing, iSCSI, LVM)
Networking concepts and protocols
LDAP, Kerberos, Samba, Active Directory
Non-x86_64 architectures

Personal Attributes
All candidates should be fluent in English (written and verbal).
A strong sense of responsibility, self-motivation, and the ability to prioritise and organise multiple, simultaneous workloads.
The ability to assess the customer situation and select the best path forward.
Interpersonal communication skills, in both oral and written form.
The ability to communicate complex technical information to customers in a clear and simple way.
Experienced in providing a timely and accurate response and resolution to customer issues over the phone or electronically.
The ability to work efficiently in a dynamic and collaborative environment with a team of highly skilled and motivated engineers.
Respectful, patient, and professional approach in line with SUSE values.

SUSE Values
We are passionate about customers
We are respectful and inclusive
We are empowered and accountable
We are trustworthy and act with integrity
We are collaborative
We are SUSE!

What We Offer
We empower you to be bold, driving your career to create the future you want. We celebrate and reward your achievements. SUSE is a dynamic environment that is evolving rapidly, thus requiring agility, strong entrepreneurship and an open mind. This is a compelling opportunity for the right person to join us as we continue to scale and prosper. If you’re a big thinker, obsessed by execution and thrive in a dynamic environment in which you can tangibly create a lasting legacy! We give you the freedom to be yourself. You will work in a global community of unique individuals – like you – with different backgrounds, talents, skills and perspectives.
A truly open community where everyone is welcome, has a voice and is encouraged to reach their full potential regardless of age, gender, race, nationality, disability, sexual orientation, religion, or any other characteristics.

Does it sound like the right fit for you? A recruiter will contact you if your skills match our current or any future positions. In the meantime, stay updated on the latest SUSE news and job vacancies by joining our Talent Community.

SUSE Values
Choice
Innovation
Trust
Community
Posted 4 days ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who We Are
Wayfair is moving the world so that anyone can live in a home they love – a journey enabled by more than 3,000 Wayfair engineers and a data-centric culture. Wayfair’s Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of Advertiser goals while showing highly relevant and engaging Ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art Machine Learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for an experienced Senior Machine Learning Scientist to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for building intelligent, ML-powered systems that drive personalized recommendations and campaign automation within Wayfair’s advertising platform. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your ML expertise to define and deliver 0-to-1 capabilities that unlock substantial commercial value and directly enhance advertiser outcomes.

What You’ll Do
Design and build intelligent budget, tROAS, and SKU recommendations, and simulation-driven decisioning that extends beyond the current advertising platform capabilities.
Lead the next phase of GenAI-powered creative optimization and automation to drive significant incremental ad revenue and improve supplier outcomes.
Raise technical standards across the team by promoting best practices in ML system design and development.
Partner cross-functionally with Product, Engineering, and Sales to deliver scalable ML solutions that improve supplier campaign performance.
Ensure systems are designed for reuse, extensibility, and long-term impact across multiple advertising workflows.
Research and apply best practices in advertising science, GenAI applications in creative personalization, and auction modeling. Keep Wayfair at the forefront of innovation in supplier marketing optimization.
Collaborate with Engineering teams (AdTech, ML Platform, Campaign Management) to build and scale the infrastructure needed for automated, intelligent advertising decisioning.

We Are a Match Because You Have:
Bachelor's or Master’s degree in Computer Science, Mathematics, Statistics, or a related field.
9+ years of experience in building large-scale machine learning algorithms.
4+ years of experience working in an architect or technical leadership position.
Strong theoretical understanding of statistical models such as regression and clustering, and of ML algorithms such as decision trees, neural networks, transformers and NLP techniques.
Proficiency in programming languages such as Python and relevant ML libraries (e.g., TensorFlow, PyTorch) to develop production-grade products.
Strategic thinker with a customer-centric mindset and a desire for creative problem solving, looking to make a big impact in a growing organization.
Demonstrated success influencing senior-level stakeholders on strategic direction based on recommendations backed by in-depth analysis; excellent written and verbal communication.
Ability to partner cross-functionally to own and shape technical roadmaps.
Intellectual curiosity and a desire to always be learning!
Nice to Have
Experience with GCP, Airflow, and containerization (Docker).
Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc.
Familiarity with Generative AI and agentic workflows.
Experience in Bayesian Learning, Multi-armed Bandits, or Reinforcement Learning.

About Wayfair Inc.
Wayfair is one of the world’s largest online destinations for the home. Through our commitment to industry-leading technology and creative problem-solving, we are confident that Wayfair will be home to the most rewarding work of your career. If you’re looking for rapid growth, constant learning, and dynamic challenges, then you’ll find that amazing career opportunities are knocking.

No matter who you are, Wayfair is a place you can call home. We’re a community of innovators, risk-takers, and trailblazers who celebrate our differences, and know that our unique perspectives make us stronger, smarter, and well-positioned for success. We value and rely on the collective voices of our employees, customers, community, and suppliers to help guide us as we build a better Wayfair – and world – for all. Every voice, every perspective matters. That’s why we’re proud to be an equal opportunity employer. We do not discriminate on the basis of race, color, ethnicity, ancestry, religion, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, genetic information, or any other legally protected characteristic.

We are interested in retaining your data for a period of 12 months to consider you for suitable positions within Wayfair. Your personal data is processed in accordance with our Candidate Privacy Notice (which can be found here: https://www.wayfair.com/careers/privacy). If you have any questions regarding our processing of your personal data, please contact us at dataprotectionofficer@wayfair.com.
If you would rather not have us retain your data please contact us anytime at dataprotectionofficer@wayfair.com.
Posted 4 days ago
3.0 years
0 Lacs
India
On-site
Rust Developer (Scalable Systems)
Experience: 3+ years
Location: Ahmedabad, Gujarat
Employment Type: Full-time

Key Responsibilities:
Design, develop, and optimize high-performance backend services using Rust, targeting 1000+ orders per second throughput.
Implement scalable architectures with load balancing for high availability and minimal latency.
Integrate and optimize Redis for caching, pub/sub, and data persistence.
Work with messaging services like Kafka and RabbitMQ to ensure reliable, fault-tolerant communication between microservices.
Develop and manage real-time systems with WebSockets for bidirectional communication.
Write clean, efficient, and well-documented code with unit and integration tests.
Collaborate with DevOps for horizontal scaling and efficient resource utilization.
Diagnose performance bottlenecks and apply optimizations at the code, database, and network level.
Ensure system reliability, fault tolerance, and high availability under heavy loads.

Required Skills & Experience:
3+ years of professional experience with Rust in production-grade systems.
Strong expertise in Redis (clustering, pipelines, Lua scripting, performance tuning).
Proven experience with Kafka, RabbitMQ, or similar messaging queues.
Deep understanding of load balancing, horizontal scaling, and distributed architectures.
Experience with real-time data streaming and WebSocket implementations.
Knowledge of system-level optimizations, memory management, and concurrency in Rust.
Familiarity with high-throughput, low-latency systems and profiling tools.
Understanding of cloud-native architectures (AWS, GCP, or Azure) and containerization (Docker/Kubernetes).

Preferred Qualifications:
Experience with microservices architecture and service discovery.
Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK).
Exposure to CI/CD pipelines for Rust-based projects.
Experience in security and fault-tolerant design for financial or trading platforms (nice to have).
Job Types: Full-time, Permanent Experience: Rust Developer: 1 year (Required) Work Location: In person
Posted 4 days ago
0 years
0 Lacs
India
Remote
🤖 Machine Learning Intern – Remote | Learn AI by Building It
📍 Location: Remote / Virtual
💼 Type: Internship (Unpaid)
🎁 Perks: Certificate After Completion || Letter of Recommendation (6 Months)
🕒 Schedule: 5–7 hrs/week | Flexible Timing

Join Skillfied Mentor as a Machine Learning Intern and move beyond online courses. You’ll work on real datasets, build models, and see your algorithms in action — all while gaining experience that hiring managers actually look for. Whether you're aiming for a career in AI, data science, or automation — this internship will build your foundation with hands-on learning.

🔧 What You’ll Do:
Work with real datasets to clean, preprocess, and transform data
Build machine learning models using Python, NumPy, Pandas, Scikit-learn
Perform classification, regression, and clustering tasks
Use Jupyter Notebooks for experimentation and documentation
Collaborate on mini-projects and model evaluation tasks
Present insights in simple, digestible formats

🎓 What You’ll Gain:
✅ Full Python course included during the internship
✅ Hands-on projects to showcase on your resume or portfolio
✅ Certificate of Completion + LOR (6-month internship)
✅ Experience with industry-relevant tools & techniques
✅ Remote flexibility — manage your time with just 5–7 hours/week

🗓️ Application Deadline: 1st August 2025
👉 Apply now to start your ML journey with Skillfied Mentor
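The clustering tasks this kind of internship involves can be illustrated with a toy one-dimensional k-means in pure Python. This is only a sketch of the idea; in practice you would use Scikit-learn's `KMeans` on real, multi-dimensional data.

```python
# Minimal 1-D k-means: alternate assignment and update steps (toy data).

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:  # assignment step: attach each point to its nearest center
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centers=[0.0, 10.0]))
# [1.0, 9.0] — the two centers converge to the means of the two point groups
```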
Posted 4 days ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
You are a strategic thinker passionate about driving solutions in “Data Science”. You have found the right team.

As a Data Science professional within our “Asset Management team”, you will spend each day defining, refining and delivering set goals for our firm. The Asset Management Data Science team is focused on enhancing and facilitating various steps in the investment process, ranging from financial analysis and portfolio management to client services and advisory. You will utilize a large collection of textual data including financial documents, analyst reports, news, meeting notes and client communications, along with more typical structured datasets. You will apply the latest methodologies to generate actionable insights to be directly consumed by our business partners.

About
Are you excited about using data science and machine learning to make a real impact in the asset management industry? Do you enjoy working with cutting-edge technologies and collaborating with a team of dedicated professionals? If so, the Data Science team at JP Morgan Asset Management could be the perfect fit for you. Here’s why:
Real-World Impact: Your work will directly contribute to improving the investment process and enhancing client experiences and operational processes, making a tangible difference in our asset management business.
Collaborative Environment: Join a team that values collaboration and teamwork. You’ll work closely with business stakeholders and technologists to develop and implement effective solutions.
Continuous Learning: We support your professional growth by providing opportunities to learn and experiment with the latest data science and machine learning techniques.

Job Responsibilities
Collaborate with internal stakeholders to identify business needs and develop NLP/ML solutions that address client needs and drive transformation.
Apply large language models (LLMs), machine learning (ML) techniques, and statistical analysis to enhance informed decision-making and improve workflow efficiency across investment functions, client services, and operational processes.
Collect and curate datasets for model training and evaluation.
Perform experiments using different model architectures and hyperparameters, determine appropriate objective functions and evaluation metrics, and run statistical analysis of results.
Monitor and improve model performance through feedback and active learning.
Collaborate with technology teams to deploy and scale the developed models in production.
Deliver written, visual, and oral presentations of modeling results to business and technical stakeholders.
Stay up-to-date with the latest research in LLMs, ML and data science. Identify and leverage emerging techniques to drive ongoing enhancement.

Required Qualifications, Capabilities, and Skills
Advanced degree (MS or PhD) in a quantitative or technical discipline, or significant practical experience in industry.
Minimum of 4 years of experience applying NLP, LLM and ML techniques to high-impact business problems, such as semantic search, information extraction, question answering, summarization, personalization, classification or forecasting.
Advanced Python programming skills, with experience writing production-quality code.
Good understanding of the foundational principles and practical implementations of ML algorithms such as clustering, decision trees, gradient descent etc.
Hands-on experience with deep learning toolkits such as PyTorch, Transformers, HuggingFace.
Strong knowledge of language models, prompt engineering, model finetuning, and domain adaptation.
Familiarity with the latest developments in deep learning frameworks.
Ability to communicate complex concepts and results to both technical and business audiences.
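As a toy illustration of the gradient-descent fundamentals listed in the qualifications, here is a minimal linear-regression fit in pure Python. The data and learning rate are made up; real models would use PyTorch or a similar framework with automatic differentiation.

```python
# Fit y = w*x + b by batch gradient descent on mean squared error (toy data).

def fit(xs, ys, lr=0.01, steps=2000):
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradients of MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = fit([0, 1, 2, 3], [1, 3, 5, 7])  # data generated by y = 2x + 1
print(w, b)  # converges toward w ≈ 2, b ≈ 1
```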
Preferred Qualifications, Capabilities, and Skills
Prior experience in an Asset Management line of business
Exposure to distributed model training and deployment
Familiarity with techniques for model explainability and self-validation

About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
J.P. Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
Posted 4 days ago
9.0 years
15 - 28 Lacs
Mumbai Metropolitan Region
On-site
This role is for one of Weekday's clients Salary range: Rs 1500000 - Rs 2800000 (i.e., INR 15-28 LPA) Min Experience: 9 years Location: Mumbai JobType: full-time Requirements We are seeking a highly skilled and experienced Oracle Architect to lead the design, development, and management of Oracle database solutions across our enterprise landscape. The ideal candidate will bring deep expertise in Oracle RAC (Real Application Clusters), RMAN, OEM, Data Guard, and other advanced Oracle technologies. This is a key role responsible for defining best practices, optimizing performance, ensuring high availability, and supporting complex data environments for mission-critical systems. Key Responsibilities 1. Database Architecture and Design Architect robust, scalable, and secure Oracle database solutions aligned with business needs and future growth. Define and implement database standards, frameworks, and design patterns to support enterprise-wide Oracle systems. Collaborate with cross-functional teams to understand business requirements and translate them into database architecture specifications. 2. Oracle RAC and High Availability Expertly configure, manage, and maintain Oracle RAC environments to ensure system uptime and fault tolerance. Implement and manage Data Guard for disaster recovery and high availability configurations. Provide guidance to network and infrastructure teams on Oracle RAC networking and clustering requirements. 3. Monitoring, Backup, and Recovery Utilize Oracle Enterprise Manager (OEM) for proactive monitoring, capacity planning, and system health checks. Manage and implement effective backup and recovery strategies using RMAN, ensuring data integrity and minimal downtime. Automate health checks, alerts, and diagnostic procedures to maintain system performance and availability. 
4. Database Performance Optimization Lead performance tuning initiatives, including SQL query optimization, index tuning, and memory management. Design and implement partitioning, caching, and clustering strategies to handle large datasets and maximize throughput. Conduct regular system audits and capacity planning reviews to anticipate and mitigate potential bottlenecks. 5. Data Integration and Migration Plan and execute seamless Oracle upgrades, patching, and database migrations with minimal disruption to business operations. Integrate Oracle systems with external applications using tools like Oracle GoldenGate, ensuring data consistency and near real-time synchronization. Lead efforts around data consolidation, transformation, and modernization initiatives. 6. Database Administration and Security Provide day-to-day operational support for Oracle environments, including health monitoring, issue resolution, and configuration management. Ensure compliance with data security standards, implementing user access controls, encryption, and auditing procedures. Maintain detailed documentation on architecture, configurations, and procedures. Required Skills & Qualifications Minimum 9 years of hands-on experience in Oracle database administration and architecture. Expertise in Oracle RAC, OEM, RMAN, ASM, Data Guard, and GoldenGate. Proficiency in Oracle 11g/12c/19c environments. Strong knowledge of SQL, PL/SQL, and database performance tuning techniques. Proven experience with backup, recovery, and disaster recovery strategies. Familiarity with enterprise CI/CD practices and scripting is a plus. Strong problem-solving, analytical, and communication skills. Ability to work independently and manage multiple high-impact projects simultaneously
Posted 4 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Work place – Remote Job title – Guardicore Security Engineer GCM level – GCM5 Reason for vacancy – Responding to a CNA RFP; requires an offshore Guardicore SME with hands-on experience, able to step in on day 1 to fully manage the CNA Guardicore environment. Job Description - The Cybersecurity Services Division of Atos North America is currently looking for a mid-level Guardicore Security Engineer to be part of the Platform Security team responsible for delivering and maintaining Atos cybersecurity solutions for our customers. Core Responsibilities You will act as a mid-level engineer on the Guardicore initiative for one of our clients based in the US. You will provide direct support to end users, handle tickets, serve as an escalation point for P1 and P2 tickets, manage configuration, maintenance/upkeep activities, create/update documentation, and support steady-state operation of the Guardicore software suite, ensuring adequate timeframes and resources are provided to the client to ensure success. You will assist in technical reviews, provide guidance on configuration changes, and recommend best practices for the Guardicore software. You will act as a member of the IT Security team, supporting the Security Tooling, Security Operations Centre, and service owner teams on matters related to Guardicore. You will monitor and manage the client’s Guardicore deployment, including troubleshooting any observed anomalies. You will provide reporting and metrics on the Guardicore deployment in the client’s environment. You will develop and maintain security rulesets and policies related to integrating customer services into the Guardicore platform. 
Minimum Qualifications Direct experience with deploying, configuring, or maintaining the Guardicore application Broad understanding of the following technology types: multi-layer applications, databases, web applications, load balancing, clustering, routing/switching, IP addressing, routing and subnetting, PKI, firewall technologies Ability to develop and maintain configuration management of complex system configurations Experience providing technical support for network architecture, design, engineering, and maintenance Thorough understanding of network flows, protocols, and application-related security controls Understanding of and familiarity with desktop and server operating systems; experience in performance tuning, monitoring, and statistics/metrics collection Understanding of enterprise environment components: DNS/DHCP/AD/VLANs/Firewall/DMZ Ability to read information system data, including, but not limited to, security and network event logs and firewall logs Ability to demonstrate strategic problem solving, good decision making, and sound judgment Bachelor’s degree in a computer-related field such as computer science, information technology, or a cybersecurity specialization, or equivalent experience Excellent troubleshooting techniques and analytical skills Excellent written and oral communication skills while working with a remote team Able to work in a dynamic environment and manage multiple projects while managing your own time and tasks with minimal supervision Preferred Certifications Guardicore Certified Segmentation Administrator (GCSA) Security+/Network+ Forrester Zero Trust Strategy Additional Skills Experience with enterprise security solutions Experience as a Windows and/or Network Security Administrator
Posted 4 days ago
0 years
0 Lacs
India
Remote
Job Title: Machine Learning Developer Company: Lead India Location: Remote Job Type: Full-Time Salary: ₹3.5 LPA About Lead India: Lead India is a forward-thinking organization focused on creating social impact through technology, innovation, and data-driven solutions. We believe in empowering individuals and building platforms that make governance more participatory and transparent. Job Summary: We are looking for a Machine Learning Developer to join our remote team. You will be responsible for building and deploying predictive models, working with large datasets, and delivering intelligent solutions that enhance our platform’s capabilities and user experience. Key Responsibilities: Design and implement machine learning models for classification, regression, and clustering tasks Collect, clean, and preprocess data from various sources Evaluate model performance using appropriate metrics Deploy machine learning models into production environments Collaborate with data engineers, analysts, and software developers Continuously research and implement state-of-the-art ML techniques Maintain documentation for models, experiments, and code Required Skills and Qualifications: Bachelor’s degree in Computer Science, Data Science, or a related field (or equivalent practical experience) Solid understanding of machine learning algorithms and statistical techniques Hands-on experience with Python libraries such as scikit-learn, pandas, NumPy, and matplotlib Familiarity with Jupyter notebooks and experimentation workflows Experience working with datasets using tools like SQL or Excel Strong problem-solving skills and attention to detail Ability to work independently in a remote environment Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch Exposure to cloud-based ML platforms (e.g., AWS SageMaker, Google Vertex AI) Understanding of model deployment using Flask, FastAPI, or Docker Knowledge of natural language processing or computer vision What We Offer: Fixed annual salary of ₹3.5 LPA 100% remote work and flexible hours Opportunity to work on impactful, mission-driven projects using real-world data Supportive and collaborative environment for continuous learning and innovation
Posted 4 days ago
0.0 - 1.0 years
0 Lacs
Satellite, Ahmedabad, Gujarat
On-site
Rust Developer (Scalable Systems) Experience: 3+ years Location: Ahmedabad, Gujarat Employment Type: Full-time Key Responsibilities: Design, develop, and optimize high-performance backend services using Rust , targeting 1000+ orders per second throughput. Implement scalable architectures with load balancing for high availability and minimal latency. Integrate and optimize Redis for caching, pub/sub, and data persistence. Work with messaging services like Kafka and RabbitMQ to ensure reliable, fault-tolerant communication between microservices. Develop and manage real-time systems with WebSockets for bidirectional communication. Write clean, efficient, and well-documented code with unit and integration tests. Collaborate with DevOps for horizontal scaling and efficient resource utilization. Diagnose performance bottlenecks and apply optimizations at the code, database, and network level. Ensure system reliability, fault tolerance, and high availability under heavy loads. Required Skills & Experience: 3+ years of professional experience with Rust in production-grade systems. Strong expertise in Redis (clustering, pipelines, Lua scripting, performance tuning). Proven experience with Kafka , RabbitMQ , or similar messaging queues. Deep understanding of load balancing, horizontal scaling , and distributed architectures. Experience with real-time data streaming and WebSocket implementations. Knowledge of system-level optimizations, memory management, and concurrency in Rust. Familiarity with high-throughput, low-latency systems and profiling tools. Understanding of cloud-native architectures (AWS, GCP, or Azure) and containerization (Docker/Kubernetes). Preferred Qualifications: Experience with microservices architecture and service discovery . Knowledge of monitoring & logging tools (Prometheus, Grafana, ELK). Exposure to CI/CD pipelines for Rust-based projects. Experience in security and fault-tolerant design for financial or trading platforms (nice to have). 
Job Types: Full-time, Permanent Experience: Rust Developer: 1 year (Required) Work Location: In person
Posted 4 days ago
0 years
0 Lacs
India
On-site
#DataScientist #DataAnalysis #RetrievalAugmentedGeneration #RAG #EDA #NumPy #scikit-learn #pandas #NLP #NER #FAISS #AWS #BERT #Python Job Overview: • Build, train, and validate machine learning models for prediction, classification, and clustering to support NBA use cases. • Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers. • Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring. • Develop vectorization and embedding pipelines using models such as Word2Vec and BERT to enable semantic understanding and similarity search. • Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases. • Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies. • Present findings and recommendations clearly to technical and non-technical stakeholders. Required Skills & Experience: • Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn. • Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.). • Proficiency in Prompt Engineering and hands-on experience in building RAG pipelines using LangChain, Haystack, or custom frameworks. • Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate). • Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling. • Sound understanding of supervised learning, recommendation systems, and classification algorithms. • Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
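The retrieval step behind the embedding and RAG workflows described above can be sketched minimally (the toy 3-dimensional "embeddings", document names, and function names here are illustrative stand-ins; a real pipeline would use BERT/Word2Vec vectors and a store such as FAISS or pgvector):

```python
import math

# Toy in-memory vector store: document -> embedding vector.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "login help":    [0.0, 0.8, 0.2],
    "pricing tiers": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding close to "refund policy"
print(retrieve([0.8, 0.2, 0.1]))  # → ['refund policy', 'login help']
```

In a RAG workflow, the retrieved documents would then be injected into the prompt of a language model to ground its answer.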
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Only immediate joiners or candidates with up to 15 days' notice. 5-8 years of experience in engineering infrastructure design; 3 years in cloud engineering roles with experience leading teams as a lead engineer/architect. Expertise in infrastructure as code (e.g., CloudFormation, Terraform Enterprise, Ansible). Experience/working knowledge of configuring, deploying, and operating public cloud services (e.g., Azure, AWS, GCP). Basic familiarity with network and security features, e.g., cloud network topology, BGP, routing, TCP/IP, DNS, SMTP, HTTPS, security guardrails. Good understanding and knowledge of container platforms, e.g., Docker, Kubernetes, EKS, GKE, OpenShift. Experience working on Linux-based infrastructure and databases such as RDS, MySQL, Mongo, Postgres. Knowledge of AWS architectural principles and key networking services such as AWS global infrastructure, VPC, projects, S3 buckets, EC2, Route 53, Transit Gateway, Direct Connect, VNet, VNet peering, Private Link, vWAN, ExpressRoute, firewalls, load balancers. Good understanding of network/security protocols and cloud security services. Experience with continuous integration and related tools such as Jenkins, Hudson, Maven, Ant, Git, Sonar. Hands-on with Azure, GCP, and AWS native and/or third-party cost management tools. High-availability engineering experience (region, availability zone, data replication, clustering) and cloud backup. Awareness of open-source tools and scripting languages (e.g., Python, PowerShell, shell) to automate manual day-to-day cloud tasks. Understanding of network architectures suitable for different cloud topologies, with familiarity with user expectations/OLAs for cloud services. Good knowledge of the security implications of public and private cloud infrastructure design.
Posted 5 days ago
4.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Overview: We are looking for a Senior Data Scientist with a strong foundation in machine learning, data analysis, and a growing expertise in LLMs and Gen AI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value. Key Responsibilities: Analyze structured and unstructured data to identify trends, patterns, and opportunities. Propose and validate AI/ML use cases based on business data and stakeholder needs. Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc. Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation). Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions. Ensure data quality, model interpretability, and ethical AI practices. Document experiments, share findings, and contribute to knowledge sharing within the team Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 3–4 years of hands-on experience in data science and machine learning. Proficient in Python and ML libraries Experience with data wrangling, feature engineering, and model evaluation. Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs). Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git). Strong communication and storytelling skills with a data-driven mindset.
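One of the clustering tasks this role mentions can be sketched with a minimal k-means in plain Python (a toy 1-D version with fixed initial centroids for determinism; real work would use scikit-learn's KMeans):

```python
def kmeans_1d(points, centroids, iters=10):
    """Tiny 1-D k-means: alternate assignment and centroid-update steps."""
    labels = []
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for each point
        labels = [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
                  for p in points]
        # Update step: each centroid moves to the mean of its assigned points
        for i in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == i]
            if members:
                centroids[i] = sum(members) / len(members)
    return centroids, labels

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centroids, labels = kmeans_1d(points, centroids=[0.0, 5.0])
print(centroids)  # one centroid near 1, one near 9.5
```

The same assign-then-update loop generalizes to higher dimensions by replacing the absolute difference with a Euclidean distance.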
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist In this role, you will: Work closely with the product and delivery teams for BAU and project deliveries. Collaborate with the architects in planning and designing the best solution. Efficiently manage compliance and vulnerabilities associated with the service. Work with wider central teams on managing dependencies and working out compliance fixes. Give your best by always thinking ahead and maintaining an automation mindset. Participate in PI planning, presenting your ideas and views on the solutions used within value streams. Documentation is key for a healthy service to operate properly, so we document everything that’s valuable for the team and our customers. Requirements To be successful in this role, you should meet the following requirements: Strong cloud knowledge with exposure to public cloud. Excellent understanding of CI/CD services, especially Jenkins, and experience developing Groovy pipelines. Strong knowledge of infrastructure in general, including operating systems, clustering, storage, network, CI/CD pipelines, etc. Proven problem solver who doesn’t seek handholding and is determined in fixing problems and finding root causes. 
Have an automation/change mentality and strive for constant improvement via automated processes. Excellent written and verbal communication skills; the candidate must be able to write technical documentation and provide clear problem statements. Great communication - convey your thoughts, ideas and opinions clearly and concisely, face-to-face or virtually, to all levels upstream and downstream. Be comfortable following Scrum methodology and working within an agile, multidisciplinary team. The mandatory key technical skills are: Groovy and pipelines, CloudBees CI/Jenkins, Java, Python, AWS, Git, Terraform, Maven. The desirable technical skills are: Docker, Splunk, Bash. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Exp- 5+ Years Location- Bangalore/Pune/Mumbai/Delhi NCR/Hyderabad Budget- Up to 30 LPA Required Skills Python: Hands-on experience with threading limitations and multi-process architecture. MySQL: Ability to integrate multiple data sources using MySQL. Strong coding knowledge and experience with several languages (e.g., R, SQL, JavaScript, Java, CSS, C++). Familiarity with statistical and data mining techniques (e.g., GLM/regression, random forest, boosting, trees, text mining, social network analysis). Experience with advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
Posted 5 days ago
9.0 - 15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title- Snowflake Data Architect Experience- 9 to 15 Years Location- Gurugram Job Summary: We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making. Key Responsibilities: Design, develop, and implement scalable Snowflake-based data architectures. Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts. Optimize Snowflake performance through clustering, partitioning, and caching strategies. Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions. Ensure data quality, governance, integrity, and security across all platforms. Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake. Automate data workflows and support CI/CD deployment practices. Implement data modeling techniques including dimensional modeling, star/snowflake schema, normalization/denormalization. Support and promote metadata management and data governance best practices. Technical Skills (Hard Skills): Expertise in Snowflake: Architecture design, performance tuning, cost optimization. Strong proficiency in SQL, Python, and scripting for data engineering tasks. Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar. Proficient in data modeling (dimensional, relational, star/snowflake schema). Good knowledge of Cloud Platforms: AWS, Azure, or GCP. Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks. 
Experience with CI/CD tools and version control systems (e.g., Git). Knowledge of BI tools such as Tableau, Power BI, or Looker. Certifications (Preferred/Required): ✅ Snowflake SnowPro Core Certification – Required or Highly Preferred ✅ SnowPro Advanced Architect Certification – Preferred ✅ Cloud Certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – Preferred ✅ ETL Tool Certifications (e.g., Talend, Matillion) – Optional but a plus Soft Skills: Strong analytical and problem-solving capabilities. Excellent communication and collaboration skills. Ability to translate technical concepts into business-friendly language. Proactive, detail-oriented, and highly organized. Capable of multitasking in a fast-paced, dynamic environment. Passionate about continuous learning and adopting new technologies. Why Join Us? Work on cutting-edge data platforms and cloud technologies Collaborate with industry leaders in analytics and digital transformation Be part of a data-first organization focused on innovation and impact Enjoy a flexible, inclusive, and collaborative work culture
Posted 5 days ago
0.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Designation: Senior Analyst Level: L2 Experience: 4 to 7 years Location: Chennai Job Description: We are seeking a highly skilled, motivated, results-driven Senior Analyst with 4+ years of experience to join a fast-paced, collaborative team at LatentView Analytics working in the financial services domain. Responsibilities: Drive measurement strategy and lead the end-to-end A/B testing process for areas of web optimization such as landing pages, user funnel, navigation, checkout, product lineup, pricing, search, and monetization opportunities. Analyze web user behavior at both the visitor and session level using clickstream data, anchoring to key web metrics and identifying user behavior through engagement and pathing analysis. Leverage AI/GenAI tools for automating tasks and building custom implementations. Use data, strategic thinking, and advanced scientific methods, including predictive modeling, to enable data-backed decision making for Intuit at scale. Measure the performance and impact of product releases. Demonstrate strategic and systems thinking to solve business problems and influence strategic decisions using data storytelling. Partner with GTM, Product, Engineering, and Design teams to drive analytics projects end to end. Build models to identify patterns in traffic and user behavior to inform acquisition strategies and optimize for business outcomes. Skills: 5+ years of experience working in web, product, marketing, or other related analytics fields solving marketing/product business problems. 4+ years of experience designing and executing experiments (A/B and multivariate) with a deep understanding of the statistics behind hypothesis testing. Proficiency in alternative A/B testing methods such as DiD, synthetic control, and other causal inference techniques. 5+ years of technical proficiency in SQL, Python, or R and data visualization tools like Tableau. 5+ years of experience manipulating and analyzing large, complex datasets (e.g., clickstream data), constructing data pipelines (ETL), and working with big data technologies (e.g., Redshift, Spark, Hive, BigQuery) and cloud platform solutions. 3+ years of experience in web analytics, analyzing website traffic patterns and conversion funnels. 5+ years of experience building ML models (e.g., regression, clustering, trees) for personalization applications. Demonstrated ability to drive strategy, execution, and insights for AI-native experiences across the development lifecycle (ideation, discovery, experimentation, scaling). Outstanding communication skills with both technical and non-technical audiences. Ability to tell stories with data, influence business decisions at a leadership level, and provide solutions to business problems. Ability to manage multiple projects simultaneously to meet objectives and key deadlines. Job Snapshot Updated Date 28-07-2025 Job ID J_3917 Location Chennai, Tamil Nadu, India Experience 4 - 7 Years Employee Type Permanent
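The hypothesis-testing fundamentals behind the A/B work described above reduce to comparing two conversion rates; a minimal sketch (conversion counts are made-up illustrative numbers, and a real analysis would use a library such as statsmodels):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for comparing two conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 200/5000 visitors converted; variant: 260/5000 converted (illustrative)
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # → 2.86; |z| > 1.96 suggests significance at the 5% level
```

Multivariate tests and causal-inference methods such as DiD build on the same idea: quantify whether an observed difference is larger than sampling noise would explain.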
Posted 5 days ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : PySpark Good to have skills : Apache Spark Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed according to the specified requirements and are aligned with the business goals. Your typical day will involve collaborating with the team to understand the application requirements, designing and developing the applications using PySpark, and configuring the applications to meet the business process needs. You will also be responsible for testing and debugging the applications to ensure their functionality and performance. Roles & Responsibilities: - Expected to be an SME, collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Design and build applications using PySpark. - Configure applications to meet business process requirements. - Collaborate with the team to understand application requirements. - Test and debug applications to ensure functionality and performance. Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark. - Good To Have Skills: Experience with Apache Spark. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. 
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark. - This position is based at our Chennai office. - A 15 years full time education is required.
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work-related problems. - Develop applications and systems using AI tools and Cloud AI services. - Implement deep learning and neural networks in solutions. - Create chatbots and work on image processing tasks. - Collaborate with team members to provide innovative solutions. - Stay updated with the latest AI/ML trends and technologies. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on implementing various machine learning algorithms like linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques including data cleaning, transformation, and normalization. Additional Information: - The candidate should have a minimum of 3 years of experience in Large Language Models. - This position is based at our Bengaluru office. 
- A 15 years full-time education is required.
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Project Role: Operations Engineer
Project Role Description: Support the operations and/or manage delivery for production systems and services based on operational requirements and service agreements.
Must have skills: Microsoft Windows Server Administration
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Operations Engineer, you will support the operations and/or manage delivery of production systems and services based on operational requirements and service agreements. Your typical day will involve ensuring the smooth functioning of production systems and services, addressing operational requirements, and adhering to service agreements.

Roles & Responsibilities:
1. Windows clustering: setup and configuration, cluster monitoring, failover management, resource management, vertical and horizontal scaling, and troubleshooting.
2. Windows storage management skills.
3. Microsoft Windows Server administration (Windows Server 2016, 2019, 2022).
4. Active participation and contribution in team discussions.
5. Manage and monitor production systems to ensure optimal performance.
6. Maintain SLAs.
7. Implement and maintain system configurations.
8. Collaborate within and across teams for service delivery.
9. Document operational processes and procedures for future reference.

Professional & Technical Skills:
- Strong knowledge of Windows clusters hosted on public and private cloud infrastructure.
- Strong ability to read cluster logs and to diagnose and resolve problems related to cluster communication, storage access, and application failover.
- Understanding of storage technologies and how to configure shared storage for a cluster.
- Operational knowledge of public cloud technologies (AWS, Azure, or OCI).

Must Have Skills:
- Proficiency in Microsoft Windows Server Administration.
- Strong understanding of system administration principles.
- Experience with system monitoring and performance tuning.
- Knowledge of network protocols and security measures.

Good To Have Skills:
- Experience with cloud platforms such as Azure or AWS.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Windows Server Administration.
- Team player with good communication skills and the ability to multitask and adapt to shifting priorities in a 24x7 environment.
- A 15-year full-time education is required.
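The cluster-log diagnosis skill called out above can be sketched in code. The snippet below is a minimal, illustrative Python sketch that parses a simplified, hypothetical log format (`<date> <time> <level> <node> <message>`, not the actual Windows Cluster log layout) and counts error events per node to spot an unhealthy cluster member.

```python
import re
from collections import Counter

# Hypothetical, simplified log-line format: "<date> <time> <level> <node> <message>"
LOG_LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>\w+) (?P<node>\S+) (?P<msg>.*)$")

def failover_events(log_lines):
    """Return (node, message) pairs for error-level lines mentioning failover or a failure."""
    events = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that do not fit the assumed format
        msg = m.group("msg")
        if m.group("level") == "ERR" and ("failover" in msg.lower() or "failed" in msg.lower()):
            events.append((m.group("node"), msg))
    return events

def failures_by_node(log_lines):
    """Aggregate error events per node to highlight an unhealthy cluster member."""
    return Counter(node for node, _ in failover_events(log_lines))

sample = [
    "2024-05-01 02:14:03 INF NODE1 Cluster service started",
    "2024-05-01 02:15:10 ERR NODE2 Cluster resource 'SQL-IP' failed",
    "2024-05-01 02:15:12 ERR NODE2 Initiating failover of group 'SQL' to NODE1",
    "2024-05-01 02:15:20 INF NODE1 Group 'SQL' online",
]
print(failures_by_node(sample))  # NODE2 is the node with repeated errors
```

In practice the raw entries would come from `Get-ClusterLog` output, and the pattern would be adapted to the real log layout.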
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before applying for a job, select your preferred language from the options available at the top right of this page.

Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Consider innovative opportunities, experience our rewarding culture, and work with talented teams that help you grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description
We are looking for an experienced and motivated Senior GCP Data Engineer to join our dynamic data team. In this role, you will be responsible for designing, building, and optimizing data pipelines, implementing advanced analytics solutions, and maintaining robust data infrastructure using Google Cloud Platform (GCP) services. You will play a key role in enabling data-driven decision-making and enhancing the performance and scalability of our data ecosystem.

Key Responsibilities
- Design, implement, and optimize data pipelines using GCP services, including Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB.
- Lead the design and optimization of schemas for large-scale data systems, ensuring data consistency, integrity, and scalability.
- Work closely with cross-functional teams to understand data requirements and deliver efficient, high-performance solutions.
- Design and execute complex SQL queries for BigQuery and other databases, ensuring optimal performance and efficiency.
- Implement efficient data processing workflows and streaming data solutions using Cloud Pub/Sub and Dataflow.
- Develop and maintain data models, schemas, and data marts to ensure consistency and scalability across datasets.
- Ensure the scalability, reliability, and security of cloud-based data architectures.
- Optimize cloud storage, compute, and query performance, driving cost-effective solutions.
- Collaborate with data scientists, analysts, and software engineers to create actionable insights and drive business outcomes.
- Implement best practices for data management, including governance, quality, and monitoring of data pipelines.
- Provide mentorship and guidance to junior data engineers and collaborate with them to achieve team goals.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- 5+ years of experience in data engineering, with a strong focus on Google Cloud Platform (GCP).
- Extensive hands-on experience with GCP Compute Engine, BigQuery, Cloud Pub/Sub, Dataflow, Cloud Storage, and AlloyDB.
- Strong expertise in SQL for query optimization and performance tuning in large-scale datasets.
- Solid experience in designing data schemas, data pipelines, and ETL processes.
- Strong understanding of data modeling techniques, and experience with schema design for both transactional and analytical systems.
- Proven experience optimizing BigQuery performance, including partitioning, clustering, and cost optimization strategies.
- Experience with managing and processing streaming data and batch data processing workflows.
- Knowledge of AlloyDB for managing transactional databases in the cloud and integrating them into data pipelines.
- Familiarity with data security, governance, and compliance best practices on GCP.
- Excellent problem-solving skills, with the ability to troubleshoot complex data issues and find efficient solutions.
- Strong communication and collaboration skills, with the ability to work with both technical and non-technical stakeholders.
Preferred Qualifications
- Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
- Familiarity with infrastructure-as-code tools like Terraform or Cloud Deployment Manager.
- GCP certifications (e.g., Google Cloud Professional Data Engineer or Cloud Architect).

Contract Type: Permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
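The BigQuery partitioning and clustering optimization mentioned in the qualifications can be illustrated without a live GCP connection. This minimal sketch generates the DDL for a time-partitioned, clustered table; the table and column names are hypothetical, and in a real pipeline the statement would be submitted through the `google-cloud-bigquery` client.

```python
def partitioned_table_ddl(table, columns, partition_col, cluster_cols):
    """Build a BigQuery CREATE TABLE statement with daily time partitioning
    and clustering on high-cardinality filter columns."""
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"CREATE TABLE `{table}` (\n  {cols}\n)\n"
        f"PARTITION BY DATE({partition_col})\n"
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )

# Illustrative names only; partitioning prunes scanned bytes by date,
# clustering co-locates rows sharing the same user_id.
ddl = partitioned_table_ddl(
    table="analytics.events",
    columns=[("event_ts", "TIMESTAMP"), ("user_id", "STRING"), ("payload", "JSON")],
    partition_col="event_ts",
    cluster_cols=["user_id"],
)
print(ddl)
```

Partitioning plus clustering is the usual first lever for the cost-optimization strategies the listing asks about, since queries that filter on the partition column scan only the matching partitions.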
Posted 5 days ago
40.0 years
6 - 8 Lacs
Hyderābād
On-site
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what is known today.

ABOUT THE ROLE
Amgen is seeking a Sr. Associate IS Business Systems Analyst with strong data science and analytics expertise to join the Digital Workplace Experience (DWX) Automation & Analytics product team. In this role, you will develop, maintain, and optimize machine learning models, forecasting tools, and operational dashboards that support strategic and day-to-day decisions for global digital workplace services. This role is ideal for candidates with hands-on experience building predictive models and working with large operational datasets to uncover insights and deliver automation solutions. You will work alongside product owners, engineers, and service leads to deliver measurable business value using data-driven tools and techniques.

Roles and Responsibilities
- Design, develop, and maintain predictive models, decision support tools, and dashboards using Python, R, SQL, Power BI, or similar platforms.
- Partner with delivery teams to embed data science outputs into business operations, focusing on improving efficiency, reliability, and end-user experience in Digital Workplace services.
- Build and automate data pipelines for data ingestion, cleansing, transformation, and model training using structured and unstructured datasets.
- Monitor, maintain, and tune models to ensure accuracy, interpretability, and sustained business impact.
- Support efforts to operationalize ML models by working with data engineers and platform teams on integration and automation.
- Conduct data exploration, hypothesis testing, and statistical analysis to identify optimization opportunities across services like endpoint health, service desk operations, mobile technology, and collaboration platforms.
- Provide ad hoc and recurring data-driven recommendations to improve automation performance, service delivery, and capacity forecasting.
- Develop reusable components, templates, and frameworks that support analytics and automation scalability across DWX.
- Collaborate with other data scientists, analysts, and developers to implement best practices in model development and lifecycle management.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. We are seeking a professional with the following qualifications.

Basic Qualifications
- Master's degree / Bachelor's degree and 5 to 9 years of experience in Data Science, Computer Science, IT, or a related field.

Must Have Skills
- Experience working with large-scale datasets in enterprise environments and with data visualization tools such as Power BI, Tableau, or equivalent.
- Strong experience developing models in Python or R for regression, classification, clustering, forecasting, or anomaly detection.
- Proficiency in SQL and working with relational and non-relational data sources.

Nice-to-Have Skills
- Familiarity with ML pipelines, version control (e.g., Git), and model lifecycle tools (MLflow, SageMaker, etc.).
- Understanding of statistics, data quality, and evaluation metrics for applied machine learning.
- Ability to translate operational questions into structured analysis and model design.
- Experience with cloud platforms (Azure, AWS, GCP) and tools like Databricks, Snowflake, or BigQuery.
- Familiarity with automation tools or scripting (e.g., PowerShell, Bash, Airflow).
- Working knowledge of Agile/SAFe environments.
- Exposure to ITIL practices or ITSM platforms such as ServiceNow.

Soft Skills
- Analytical mindset with attention to detail and data integrity.
- Strong problem-solving and critical thinking skills.
- Ability to work independently and drive tasks to completion.
- Strong collaboration and teamwork skills.
- Adaptability in a fast-paced, evolving environment.
- Clear and concise documentation habits.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
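Anomaly detection is one of the must-have modeling skills in this listing. As a minimal, library-free sketch (with made-up ticket-volume figures, not Amgen data), a global z-score flags points that deviate from the mean by more than a chosen number of standard deviations:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Illustrative daily service-desk ticket volumes with one obvious spike.
tickets = [120, 118, 125, 122, 119, 121, 400, 117, 123, 120]
print(zscore_anomalies(tickets, threshold=2.0))  # flags the spike at index 6
```

A production model would use robust statistics (median/MAD) or a trained detector rather than a global z-score, since the spike itself inflates the mean and standard deviation it is measured against.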
Posted 5 days ago
14.0 years
0 Lacs
Bengaluru
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

The opportunity
The C&I Strategy Insights Associate Director will be responsible for enabling business leaders to understand C&I performance, along with its key drivers, through actionable and impactful insights. This professional needs to be able to analyse data, derive patterns, and infer insights through the lens of business context, generating easy-to-understand business narratives. As an Associate Director, the role also demands strategic foresight to align insights with broader business transformation goals for the Super Regions, working closely with Leadership. This role requires the ability to combine strong analytical skills and a strategic mindset with a real-world perspective, driven by an understanding of both clients’ issues and broader marketplace drivers. A collaborative mindset, working across and through the Europe West Super Region, Industries, and Service Lines to identify growth enablers, is crucial to enable activation and growth. They must be adept not only at understanding and interpreting performance data but also at implementing solution-oriented strategies that drive business growth and innovation.
Your key responsibilities
- Champion strategic insight initiatives that influence leadership decision-making across Europe West.
- Drive alignment of C&I KPIs with strategic priorities and transformation goals.
- Lead cross-functional collaboration with senior stakeholders to embed insights into go-to-market strategies.
- Generate actionable insights on C&I KPIs across revenue, sales, and pipeline for Market and BD Leaders.
- Build engaging and impactful presentations and executive communications.
- Identify growth opportunities through a combination of internal and external sources.
- Articulate complex problems and processes in a concise, simple, ready-to-consume format.
- Use initiative and problem-solving skills to make appropriate recommendations at both an operational and strategic level.
- Set up and oversee the governance and operations of data collation and reporting.
- Build efficiencies, automation, and standardization of data workflows.
- Develop and maintain collaboration tools and portals to facilitate seamless and efficient operations.
- Provide baselines and targets, and measure progress toward goals.
- Based on insights, help Big Bet Leaders build and monitor the activation plan of each Big Bet in strong alignment with Industries.
- Provide region oversight and leadership of Big Bet solutions.
- Ensure Big Bet success stories are built via the EW client story initiative.
- Support internal and external activation initiatives jointly with solution owners and Sector activation teams.

Skills and attributes for success
- Proven ability to influence senior leadership and drive consensus across diverse stakeholder groups.
- Strong executive presence with the ability to represent insights at leadership forums and strategic reviews.
- Experience in navigating complex matrix structures and enabling cross-border collaboration.
- Create and validate hypotheses based on business objectives.
- Identify key drivers of performance; strong analytical and problem-solving skills.
- Support leadership meetings and drive action.
- Cross SR/SL/SSL/Industry networking, team building, and stakeholder management.
- Produce insightful analysis to assist leadership in decision-making.
- Build a deep understanding of stakeholders’ business and requirements based on business context.
- Identify and resolve issues that impact delivery.
- Manage and support initiatives; clarify objectives, priorities, scope changes, and timelines.
- Strong business writing skills, with the ability to create content independently with limited input.
- Ability to work autonomously as well as integrate with other areas of the business.
- Good time and priority management across multiple projects under tight deadlines.
- Solution-focused mindset to translate strategy into plans and execute them seamlessly.
- High attention to detail.

To qualify for the role, you must have
- 14+ years of work experience.
- Exposure to Big 4 or leading consulting firms is a plus.
- Proven ability to manage complex processes and projects at a global level.
- Demonstrated success in leading strategic programs or insight functions at a regional or global level.
- Experience working with or enabling leadership teams in super regions.
- Agile program management experience.
- Experience in professional services or similar industries.
- Must have worked in one or more of the areas listed below:
  - Operations Management & Excellence
  - Project & Program Management
  - Client Services & Relationship Management
  - C-Suite & Leadership Enablement
- Graduate/Post-graduate degree in Operations, Business Administration/Management, or Marketing.
- Extensive experience working as a business analyst in a professional services environment, ideally with experience of revenue, sales, and pipeline analysis.
- Strong collaboration skills to enable teaming with other business functions.

Ideally, you’ll also have
- Ability to summarize business performance and its drivers through easy-to-consume visuals and charts.
- Ability to map business problems to data and vice versa.
- Ability to measure data quality and fix data issues.
- Ability to embed external macro trends with internal performance and forecasts.
- Familiarity with Europe West market dynamics and strategy frameworks.
- Experience contributing to or shaping FY planning cycles through data-driven insights.

Technologies and Tools
- MS PowerPoint for senior executives, including visuals and charts.
- Knowledge of ML (forecasting, clustering, driver analysis) is a plus.
- Knowledge of data visualization tools like Power BI and Tableau.
- Knowledge of project management concepts and tools.

What we look for
- Strong, confident communication and articulation (verbal, written/charts).
- Analytical problem-solving skills.
- Ability to break down business challenges into data-driven use cases.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy.
From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

About EY
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers to the complex issues facing our world today.

If you can demonstrate that you meet the criteria above, please contact us as soon as possible. The exceptional EY experience. It’s yours to build.
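The forecasting knowledge listed under Technologies and Tools can be sketched with the simplest possible baseline: a rolling moving-average forecast. The figures below are illustrative, not EY data, and real planning cycles would use a proper time-series model.

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` future points by repeatedly taking the mean
    of the last `window` values (a naive rolling baseline)."""
    history = list(series)
    out = []
    for _ in range(horizon):
        nxt = sum(history[-window:]) / window
        out.append(round(nxt, 2))
        history.append(nxt)  # feed the forecast back in for the next step
    return out

quarterly_revenue = [10.0, 12.0, 11.0, 13.0]  # made-up quarterly figures
print(moving_average_forecast(quarterly_revenue, window=3, horizon=2))
```

Baselines like this are mainly useful as a sanity check: any forecasting model adopted for planning should at least beat the rolling average.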
Posted 5 days ago