5.0 years
0 Lacs
Nashik, Maharashtra, India
Remote
Experience: 5+ years
Salary: Confidential (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 12 months (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients - ANS Health)

What do you need for this opportunity?
Must-have skills: JSP (JavaServer Pages), RESTful APIs, Spring Framework, version control systems, Core Java, JDK 11/17 (or later versions), Wildfly, AWS, HTML, JavaScript, SQL

ANS Health is looking for:
We are seeking a highly skilled and motivated Java/SQL Developer to join our remote team in India. This role is crucial for developing, enhancing, and maintaining our critical backend systems and database solutions. You will be an integral part of a global team, collaborating closely with US-based colleagues, which necessitates a work schedule that aligns with US Eastern Time business hours (until at least 4:00 PM ET). This is an excellent opportunity for a proactive and experienced developer who thrives in a remote setting and is passionate about delivering high-quality, high-performance software.

Key Responsibilities
- Design, develop, test, deploy, and maintain robust and scalable Java-based applications.
- Work extensively with JDK 11/17, Wildfly Server, and EJB/JMS/ActiveMQ technologies to build enterprise-grade solutions.
- Develop, optimize, and manage complex SQL queries, stored procedures, functions, and database schemas.
- Collaborate with US-based development and product teams to understand requirements, define specifications, and deliver technical solutions.
- Participate in code reviews, ensuring adherence to coding standards, best practices, and performance guidelines.
- Troubleshoot, debug, and resolve issues in existing applications and database systems.
- Contribute to the full software development lifecycle, from concept and design to testing and deployment.
- Ensure timely delivery of high-quality software features and enhancements.
- Stay current with emerging technologies and industry trends to recommend improvements and innovations.

Primary Skills (Must-Have)
- 5+ years of hands-on experience in Java development.
- Strong proficiency with JDK 11/17 (or later versions), including modern Java features.
- Experience deploying, configuring, and optimizing an enterprise-grade application server such as Wildfly (preferred).
- In-depth knowledge and practical experience with JMS (Java Message Service) or ActiveMQ-based application development, including producers, consumers, and topics.
- Expertise in SQL, including writing complex queries, performance tuning, schema design, and working with relational databases (e.g., MS SQL Server (preferred), Oracle, MySQL, PostgreSQL).

Secondary Skills (Good to Have)
- Experience with JSP (JavaServer Pages).
- Proficiency in HTML and JavaScript for front-end integration.
- Familiarity with version control systems (e.g., Git).
- Understanding of RESTful APIs and web services.
- Knowledge of the Spring Framework is a plus.

Work Schedule Requirements
This role requires significant overlap with US Eastern Time (ET) business hours. Candidates must be able to work a shift that allows collaboration with US-based teams until at least 4:00 PM ET, which typically translates to a late afternoon/evening/night shift in India. Specific working hours will be discussed during the interview process to ensure mutual alignment.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven track record of successfully delivering software projects.
- Excellent problem-solving and analytical skills.
- Strong verbal and written communication skills in English.
- Ability to work independently and as part of a distributed team.
- Self-motivated, proactive, and detail-oriented.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
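The SQL performance-tuning skill the posting asks for can be illustrated with a small script. This is a sketch only: it uses Python's built-in SQLite in place of the MS SQL Server/Oracle databases named above, and the table, column, and index names are invented.

```python
import sqlite3

# Invented schema: a small orders table standing in for a real backend database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [(f"cust{i % 100}", float(i)) for i in range(1000)],
)

# Without an index, the detail column of the query plan reports a full table scan.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer = ?", ("cust7",)
).fetchall()
print(plan_scan)  # detail string mentions a SCAN of orders

# A covering index on (customer, total) lets the same query be answered by an
# index search instead of a scan - the core of this kind of tuning work.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer, total)")
plan_index = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer = ?", ("cust7",)
).fetchall()
print(plan_index)  # detail string mentions the covering index
```

In MS SQL Server the equivalent diagnosis would use the graphical execution plan or `SET SHOWPLAN_ALL ON` rather than `EXPLAIN QUERY PLAN`, but the reasoning is the same.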
Posted 4 days ago
5.0 years
0 Lacs
Nagpur, Maharashtra, India
Remote
Posted 4 days ago
5.0 years
0 Lacs
Kanpur, Uttar Pradesh, India
Remote
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
The world's top banks use Zafin's integrated platform to drive transformative customer value. Powered by an innovative AI-powered architecture, Zafin's platform seamlessly unifies data from across the enterprise to accelerate product and pricing innovation, automate deal management and billing, and create personalized customer offerings that drive expansion and loyalty. Zafin empowers banks to drive sustainable growth, strengthen their market position, and define the future of banking centered around customer value.

What's the opportunity?
We are looking for a FinOps Analyst to help design and implement financially optimized, scalable, and secure Azure cloud infrastructure. You will integrate FinOps principles, manage cost-effective cloud operations, and collaborate with engineering teams to ensure financial accountability and resource efficiency.

Key Responsibilities

Cloud Cost Monitoring & Analysis
- Monitor Azure cost dashboards, usage trends, and budget adherence across multiple subscriptions, accounts, and resource groups.
- Analyze granular cloud spend data and provide clear insights into resource-level consumption, highlighting trends, anomalies, and cost drivers.
- Identify unusual cost spikes, unused resources, and underutilized services; recommend optimization actions to improve cloud ROI.
- Work with engineering and infrastructure teams to align cloud usage with budgeted expectations and suggest tuning of misconfigured or inefficient resources.

Reporting & Insights
- Generate regular reports, executive summaries, and visual dashboards on Azure spend, forecasting, and cost-optimization metrics.
- Support the budgeting and forecasting process for cloud spend with usage-based analytics.
- Communicate findings and trends clearly to technical and non-technical stakeholders, flagging areas of concern, overruns, or budget risks.

Tools & Platforms
- Leverage Azure Cost Management and Billing, Azure Advisor, and related Microsoft tools for usage tracking and optimization recommendations.
- Explore and propose additional tools and scripts (e.g., Power BI, Cost Explorer APIs, or Excel-based automation) to enhance reporting and alerting capabilities.

Cross-functional Support
- Collaborate with cloud operations, DevOps, and engineering teams to implement optimization strategies.
- Participate in regular cost-review meetings and post-mortem analyses when unexpected cost behavior is observed.

Required Skills & Qualifications
- 3 to 5 years of experience.
- Basic to intermediate understanding of Microsoft Azure cloud infrastructure and services (IaaS, PaaS, tagging, subscription management).
- Hands-on experience with Azure Cost Management tools and dashboards.
- Proficiency in analyzing large datasets, identifying cost trends, and presenting actionable insights.
- Strong Excel skills, including pivot tables, VLOOKUP/XLOOKUP, and charts.
- Analytical mindset with keen attention to detail and a proactive approach to problem-solving.
- Excellent verbal and written communication skills.
- Bachelor's degree in Finance, Computer Science, Engineering, or a related field.

Preferred Qualifications
- Exposure to FinOps principles or formal FinOps certification.
- Experience working with multi-cloud or large-scale enterprise Azure environments.
- Familiarity with automation or scripting for reporting purposes (e.g., PowerShell, Python, or Azure CLI).
- Experience with reporting tools like Power BI, Tableau, or Looker.

What's in it for you
Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement. Want to learn more about what you can look forward to during your career with us? Visit our careers site and our openings: zafin.com/careers

Zafin welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process. Zafin is committed to protecting the privacy and security of the personal information collected from all applicants throughout the recruitment process. The methods by which Zafin collects, uses, stores, handles, retains, or discloses applicant information can be reviewed in Zafin's privacy policy at https://zafin.com/privacy-notice/. By submitting a job application, you confirm that you agree to the processing of your personal data by Zafin as described in the candidate privacy notice.
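The cost-spike detection this role describes can be sketched in a few lines. This is illustrative only: the resource-group names and daily figures are invented, and a real version would pull spend data from Azure Cost Management exports rather than a hard-coded dict.

```python
from statistics import mean, stdev

# Hypothetical daily Azure spend (USD) per resource group, most recent day last.
daily_spend = {
    "rg-web":  [120, 118, 125, 122, 119, 121, 240],  # last day spikes
    "rg-data": [310, 305, 298, 312, 307, 309, 311],  # stable
}

def flag_spike(series, threshold=3.0):
    """Flag the latest day if it sits more than `threshold` standard
    deviations above the mean of the preceding days."""
    history, latest = series[:-1], series[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (latest - mu) / sigma > threshold

anomalies = [rg for rg, series in daily_spend.items() if flag_spike(series)]
print(anomalies)  # ['rg-web']
```

A z-score against recent history is a deliberately simple anomaly signal; Azure Cost Management's built-in anomaly alerts would normally be the first line of defense, with a script like this reserved for custom reporting.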
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Performance Monitoring & Analytics Consultant (Full-Time, Pune)

Job Title: Performance Monitoring & Analytics Consultant
Job Type: Full-Time
Location: Pune
Experience: 5+ Years
Educational Background: BE/BTech

Job Description:
Must-have skills:
* Overall 5+ years of experience.
* 5+ years of experience in Dynatrace administration.
* 5+ years of experience with any enterprise monitoring tool.
* 3+ years of working experience with automation toolsets.
* 5+ years of experience in application performance monitoring using enterprise-standard tools.
* Proven experience with Dynatrace SaaS implementation and migration of legacy monitoring solutions to a next-gen observability solution.
* 3+ years of experience working with agile, scalable software engineering.
* 3+ years of experience in CI/CD, automation, and DevOps practices.
* Knowledge of application architecture, OSI layers, and software design and development methodologies.
* Proven diagnosis and tuning experience with application, middleware, and infrastructure components.
* Prior experience with business metrics reporting, customer experience monitoring, and optimization for digital products.
* Scripting capabilities (Ansible, Shell, Bash, Perl, PowerShell, etc.) to execute monitoring tasks for custom requirements within the capabilities of the suite of monitoring tools.
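The scripted monitoring tasks the last bullet describes might look like the sketch below. Everything here is invented for illustration: the service names, sample latencies, and SLO threshold are assumptions, and a real implementation would pull measurements from Dynatrace (or another APM tool) via its API rather than from hard-coded lists.

```python
# Hypothetical per-request latencies (ms) sampled for two services.
response_times_ms = {
    "checkout": [210, 230, 1900, 250, 220],
    "catalog":  [95, 102, 99, 101, 97],
}

SLO_MS = 500  # assumed latency objective per request

def breach_rate(samples, slo=SLO_MS):
    """Fraction of sampled requests that exceeded the latency objective."""
    return sum(1 for s in samples if s > slo) / len(samples)

# Alert on any service where more than 10% of requests breached the SLO.
report = {svc: breach_rate(ts) for svc, ts in response_times_ms.items()}
alerts = sorted(svc for svc, rate in report.items() if rate > 0.1)
print(alerts)  # ['checkout']
```

Alerting on a breach *rate* rather than a single slow request is the usual design choice here, since it tolerates one-off outliers while still catching sustained degradation.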
Posted 4 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Responsible for developing software programs per technical specifications following programming standards and procedures, performing testing, executing program modifications, and responding to problems by diagnosing and correcting errors in logic and coding.

Key Responsibilities
- Applies secure coding and UI standards and best practices to develop, enhance, and maintain IT applications and programs.
- Assists with efforts to configure, analyze, design, develop, and maintain program code and applications.
- Performs unit testing, secure code testing, and issue resolution.
- Follows the process for source code management.
- Participates in integration, systems, and performance testing and tuning of code.
- Participates in peer secure code reviews.
- Harvests opportunities for reusability of code, configurations, procedures, and techniques.

Competencies
- Action oriented: Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
- Balances stakeholders: Anticipating and balancing the needs of multiple stakeholders.
- Business insight: Applying knowledge of business and the marketplace to advance the organization's goals.
- Drives results: Consistently achieving results, even under tough circumstances.
- Plans and aligns: Planning and prioritizing work to meet commitments aligned with organizational goals.
- Tech savvy: Anticipating and adopting innovations in business-building digital and technology applications.
- Performance Tuning: Conceptualizes, analyzes, and solves application, database, and hardware problems using industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Solution Configuration: Configures, creates, and tests solutions for commercial off-the-shelf (COTS) applications using industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Solution Functional Fit Analysis: Composes and decomposes a system into its component parts using procedures, tools, and work aids to study how well the component parts were designed, purchased, and configured to interact holistically to meet business, technical, security, governance, and compliance requirements.
- Solution Validation Testing: Validates a configuration item change or solution using the function's defined best practices, including Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in Computer Science, Information Technology, Business, or a related subject, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate level of relevant work experience required: 3-5 years.

Qualifications
- More than 3 years of hands-on experience as a Salesforce developer in Classic or Lightning Web Components; able to migrate custom components to Lightning Web Components.
- Experience with code analysis tools to analyze custom code and its efficiency.
- Designs solutions on the principles of configuration and use of out-of-the-box (OOB) features to ensure scalability of the org.
- Uses Lightning Flows to automate processing needs.
- Bachelor's degree, ideally in Computer Science, Engineering, or MIS.
- 4+ years of experience in Force.com/Lightning/LWC/Apex; CI/CD experience (Copado/Jenkins/DevOps) is mandatory.
- Experience with the Salesforce.com platform: Sales Cloud, Service Cloud, CPQ, Experience Cloud, etc.
- Experience with Lightning Pages, Visualforce, Triggers, SOQL, SOSL, APIs, Flows, LWC, and Web Services (SOAP & REST).
- Salesforce Certified Platform Developer I & II, Salesforce Certified App Builder.
- Proficiency in data manipulation and analysis using SQL.
- Experience with the Angular framework and Java.
- Experience with data visualization tools like Tableau, Power BI, or similar; Airflow is good to have.
- Agile methodologies; well versed with GUS/JIRA.
- Strong communication skills at all levels and a proactive approach to problem-solving.
- Follows the release management CI/CD code deployment process to migrate code changes.
- Attends daily scrum calls and works in a global model.
- Familiarity with JavaScript, CSS, Splunk Analytics, Visual Studio Code, GitHub, versioning, and packaging.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2417818
Relocation Package: Yes
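The SQL data-manipulation skill this posting lists can be sketched with a toy query. This is a hedged illustration: the table mimics a Salesforce Opportunity object but is plain SQLite (not SOQL, which does not support arbitrary GROUP BY expressions the same way), and every record is invented.

```python
import sqlite3

# Invented, Opportunity-like records in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE opportunity (name TEXT, stage TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO opportunity VALUES (?, ?, ?)",
    [
        ("Acme renewal", "Closed Won", 50000.0),
        ("Globex upsell", "Negotiation", 12000.0),
        ("Initech pilot", "Closed Won", 8000.0),
    ],
)

# Aggregate pipeline value per stage, largest first.
rows = conn.execute(
    "SELECT stage, SUM(amount) FROM opportunity "
    "GROUP BY stage ORDER BY SUM(amount) DESC"
).fetchall()
print(rows)  # [('Closed Won', 58000.0), ('Negotiation', 12000.0)]
```

The SOQL equivalent would be `SELECT StageName, SUM(Amount) FROM Opportunity GROUP BY StageName`, run through the Salesforce APIs rather than a local database.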
Posted 4 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Responsible for developing software programs per technical specifications following programming standards and procedures, performing testing, executing program modifications, and responding to problems by diagnosing and correcting errors in logic and coding.

Key Responsibilities
- Applies secure coding and UI standards and best practices to develop, enhance, and maintain IT applications and programs.
- Assists with efforts to configure, analyze, design, develop, and maintain program code and applications.
- Performs unit testing, secure code testing, and issue resolution.
- Follows the process for source code management.
- Participates in integration, systems, and performance testing and tuning of code.
- Participates in peer secure code reviews.
- Harvests opportunities for reusability of code, configurations, procedures, and techniques.

Competencies
- Action oriented: Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
- Balances stakeholders: Anticipating and balancing the needs of multiple stakeholders.
- Business insight: Applying knowledge of business and the marketplace to advance the organization's goals.
- Drives results: Consistently achieving results, even under tough circumstances.
- Plans and aligns: Planning and prioritizing work to meet commitments aligned with organizational goals.
- Tech savvy: Anticipating and adopting innovations in business-building digital and technology applications.
- Performance Tuning: Conceptualizes, analyzes, and solves application, database, and hardware problems using industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Solution Configuration: Configures, creates, and tests solutions for commercial off-the-shelf (COTS) applications using industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Solution Functional Fit Analysis: Composes and decomposes a system into its component parts using procedures, tools, and work aids to study how well the component parts were designed, purchased, and configured to interact holistically to meet business, technical, security, governance, and compliance requirements.
- Solution Validation Testing: Validates a configuration item change or solution using the function's defined best practices, including Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in Computer Science, Information Technology, Business, or a related subject, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate level of relevant work experience required: 3-5 years.

Qualifications
- More than 3 years of hands-on experience as a Salesforce developer in Classic or Lightning Web Components; able to migrate custom components to Lightning Web Components.
- Experience with code analysis tools to analyze custom code and its efficiency.
- Designs solutions on the principles of configuration and use of out-of-the-box (OOB) features to ensure scalability of the org.
- Uses Lightning Flows to automate processing needs.
- Bachelor's degree, ideally in Computer Science, Engineering, or MIS.
- 6+ years of experience in Force.com/Lightning/LWC/Apex; CI/CD experience (Copado/Jenkins/DevOps) is mandatory.
- Experience with the Salesforce.com platform: Sales Cloud, Service Cloud, CPQ, Experience Cloud, etc.
- Experience with Lightning Pages, Visualforce, Triggers, SOQL, SOSL, APIs, Flows, LWC, and Web Services (SOAP & REST).
- Salesforce Certified Platform Developer I & II, Salesforce Certified App Builder.
- Proficiency in data manipulation and analysis using SQL.
- Experience with the Angular framework and Java.
- Experience with data visualization tools like Tableau, Power BI, or similar; Airflow is good to have.
- Agile methodologies; well versed with GUS/JIRA.
- Strong communication skills at all levels and a proactive approach to problem-solving.
- Follows the release management CI/CD code deployment process to migrate code changes.
- Attends daily scrum calls and works in a global model.
- Familiarity with JavaScript, CSS, Splunk Analytics, Visual Studio Code, GitHub, versioning, and packaging.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Hybrid
Job Type: Exempt - Experienced
ReqID: 2417820
Relocation Package: Yes
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Assoicate AIML Engineer– Global Data Analytics, Technology (Maersk) This position will be based in India – Bangalore/Pune A.P. Moller - Maersk A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 Countries and this mean, we have elevated level of responsibility to continue to build inclusive workforce that is truly representative of our customers and their customers and our vendor partners too. We are responsible for moving 20 % of global trade & is on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customer’s supply chain through global end-to-end solutions, all the while rethinking the way we engage with customers and partners. The Brief In this role as an Associate AIML Engineer on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. Data AI/ML (Artificial Intelligence and Machine Learning) Engineering involves the use of algorithms and statistical models to enable systems to analyse data, learn patterns, and make data-driven predictions or decisions without explicit human programming. 
AI/ML applications leverage vast amounts of data to identify insights, automate processes, and solve complex problems across a wide range of fields, including healthcare, finance, e-commerce, and more. AI/ML processes transform raw data into actionable intelligence, enabling automation, predictive analytics, and intelligent solutions. Data AI/ML combines advanced statistical modelling, computational power, and data engineering to build intelligent systems that can learn, adapt, and automate decisions. What I'll be doing – your accountabilities? Build and maintain machine learning models for various applications, such as natural language processing, computer vision, and recommendation systems Perform exploratory data analysis (EDA) to identify patterns and trends in data Clean, preprocess, perform hyperparameter tuning and analyze large datasets to prepare them for AI/ML model training Build, test, and optimize machine learning models and experiment with algorithms and frameworks to improve model performance Use programming languages, machine learning frameworks and libraries, algorithms, data structures, statistics and databases to optimize and fine-tune machine learning models to ensure scalability and efficiency Learn to define user requirements and align solutions with business needs Work on AI/ML engineering projects, perform feature engineering and collaborate with teams to understand business problems Learn best practices in data / AI/ML engineering and performance optimization Contribute to research papers and technical documentation Contribute to project documentation and maintain data quality standards Foundational Skills Understands Programming skills beyond the fundamentals and can demonstrate this skill in most situations without guidance. 
Understands the below skills beyond the fundamentals and can demonstrate them in most situations without guidance: AI & Machine Learning Data Analysis Machine Learning Pipelines Model Deployment Specialized Skills To be able to understand beyond the fundamentals and demonstrate in most situations without guidance the following skills: Deep Learning Statistical Analysis Data Engineering Big Data Technologies Natural Language Processing (NLP) Data Architecture Data Processing Frameworks Proficiency in Python programming. Proficiency in Python-based statistical analysis and data visualization tools. You may have a limited understanding of Technical Documentation but are focused on growing this skill. Qualifications & Requirements BSc/MSc/PhD in computer science, data science or related discipline with 5+ years of industry experience building cloud-based ML solutions for production at scale, including solution architecture and solution design experience Good problem-solving skills, for both technical and non-technical domains Good broad understanding of ML and statistics covering standard ML for regression and classification, forecasting and time-series modeling, deep learning 3+ years of hands-on experience building ML solutions in Python, including knowledge of common Python data science libraries (e.g. scikit-learn, PyTorch, etc.) Hands-on experience building end-to-end data products based on AI/ML technologies Some experience with scenario simulations. Experience with collaborative development workflow: version control (we use GitHub), code reviews, DevOps (including automated testing), CI/CD Team player, eager to collaborate Preferred Experiences In addition to basic qualifications, it would be great if you have… Hands-on experience with common OR solvers such as Gurobi Experience with a common dashboarding technology (we use Power BI) or web-based frontend such as Dash, Streamlit, etc.
Experience working in cross-functional product engineering teams following agile development methodologies (scrum/Kanban/…) Experience with Spark and distributed computing Strong hands-on experience with MLOps solutions, including open-source solutions. Experience with cloud-based orchestration technologies, e.g. Airflow, KubeFlow, etc Experience with containerization (Kubernetes & Docker) As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
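The accountabilities above (hyperparameter tuning, building and optimizing models) can be illustrated with a minimal, library-free sketch of an exhaustive grid search over a toy threshold classifier. The dataset, parameter grids, and the "model" itself are invented purely for illustration; a real pipeline would use scikit-learn or similar.

```python
from itertools import product

# Toy "model": predict positive when a scaled score crosses a threshold.
def accuracy(threshold, scale, data):
    correct = 0
    for score, label in data:
        pred = 1 if score * scale >= threshold else 0
        correct += (pred == label)
    return correct / len(data)

def grid_search(data, thresholds, scales):
    """Exhaustively evaluate every (threshold, scale) pair, keep the best."""
    best = None
    for t, s in product(thresholds, scales):
        acc = accuracy(t, s, data)
        if best is None or acc > best[0]:
            best = (acc, t, s)
    return best

# Illustrative (score, label) pairs.
data = [(0.9, 1), (0.8, 1), (0.3, 0), (0.2, 0), (0.6, 1), (0.4, 0)]
best_acc, best_t, best_s = grid_search(data, [0.3, 0.5, 0.7], [0.5, 1.0, 2.0])
```

The same exhaustive-evaluation idea scales up to real hyperparameters (learning rate, depth, regularization) via tools like `GridSearchCV`.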
Posted 4 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description - External Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibility: Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Qualifications - External Required Qualifications: B.E/B.Tech/M.Tech/MCA Ability to address upcoming deliverables/work orders/tickets independently; this requires hands-on experience with the technologies below Solid hands-on Java 17 and above Solid hands-on Spring, Spring Boot, Hibernate, JSF and ReactJS Hands-on web services (REST/microservices/API) Hands-on Unix scripting Hands-on RDBMS databases like Oracle, SQL & DB2 Hands-on NoSQL DB, preferably MongoDB Hands-on Kafka messaging services Hands-on Eclipse or STS Hands-on JBoss and WAS Hands-on Cloud (preferably GCP) Hands-on DevOps Ability to work with the GitHub Actions/DevOps model Solid analytical, debugging and performance tuning skills Ability to interact with the business.
Hence, a good communication skill set is required. Preferred Qualifications: Knowledge of Grafana, Elastic APM Knowledge of Cucumber Knowledge of Kubernetes
Posted 4 days ago
7.0 years
25 - 35 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Accelon Website: Visit Website Business Type: Small/Medium Business Company Type: Product & Service Business Model: B2B Funding Stage: Pre-seed Industry: FinTech Salary Range: ₹ 25-35 Lacs PA Job Description This is a permanent role with a product-based global fintech company - a valued client of Accelon Inc. Required Skills Java, Spring Boot and REST Oracle DB Good knowledge of data structures and algorithm concepts At least 7 years of experience in software product development. Bachelor's/Master's degree in Computer Science, Engineering, or a closely related quantitative discipline. Expertise in online payments and related domains is a plus. Requirements Strong skills in Java, Scala, Spark & Raptor and OO-based design and development. Strong skills in Spring Boot, Hibernate, REST, Maven, GitHub, and other open-source Java libraries. Excellent problem-solving abilities and strong understanding of the software development/delivery lifecycle. Proven track record working with real-world projects and delivering complex software projects from concept to production, with a focus on scalability, reliability, and performance. Good knowledge of data structures and algorithm concepts, as well as database design, tuning and query optimization. Strong debugging and problem resolution skills and a focus on automation and test-driven development. Ability to work in a fast-paced, iterative development environment. Hands-on development experience using Java, Spring Core and Spring Batch. Deep understanding of and extensive experience applying advanced object-oriented design and development principles. Experience developing data-driven applications using an industry-standard RDBMS (Oracle, etc.), including strong data architecture and SQL development skills. Knowledge of data modelling with relational databases, Elasticsearch (Kibana), Hadoop. Experience with REST APIs, Web Services, JMS, Unit Testing and build tools.
Responsibilities Team member will be expected to adhere to SDLC process and interact with the team on a daily basis. Develops efficient, elegant, clean, reusable code with no unnecessary complication or abstraction. Manages workload and other assignments efficiently while being able to resolve time-critical situations reliably and professionally. Work with various PD teams on integration and post-integration (live) issues. Engage in the automation of daily activities that drive operational excellence and ensure highly productive operating procedures. Weekend and after-hours support are required for BCDC products and applications on the live site, on a rotating schedule.
Posted 4 days ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Data Software Engineer Location: Chennai and Coimbatore Mode: Hybrid Interview: Walk-in 5-12 years of experience in Big Data & related technologies Expert-level understanding of distributed computing principles Expert-level knowledge and experience in Apache Spark Hands-on programming with Python Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop Experience with building stream-processing systems, using technologies such as Apache Storm or Spark Streaming Good understanding of Big Data querying tools, such as Hive and Impala Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, Files Good understanding of SQL queries, joins, stored procedures, relational schemas Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of ETL techniques and frameworks Performance tuning of Spark jobs Experience with Azure Databricks Ability to lead a team efficiently Experience with designing and implementing Big Data solutions Practitioner of Agile methodology
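As a toy illustration of the MapReduce model listed above, here is a minimal map/shuffle/reduce word count in plain Python. A real job would run distributed on Hadoop or Spark rather than in one process; the three phases here only mirror the programming contract.

```python
from collections import defaultdict

def map_phase(records):
    # Mapper: emit a (key, 1) pair for every word in every record.
    for rec in records:
        for word in rec.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate the grouped values per key.
    return {k: sum(vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(map_phase(["spark hive spark", "hive impala"])))
# counts == {"spark": 2, "hive": 2, "impala": 1}
```

The same mapper/reducer pair, unchanged in logic, is what a Hadoop Streaming or Spark `flatMap`/`reduceByKey` job distributes across a cluster.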
Posted 4 days ago
0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
About Us At Rhythm, our values form the foundation of our business. We are passionate about customer success, innovation, and our employees. They guide our actions, decisions, and interactions, ensuring that we consistently make a positive impact on the world around us. Job Description Rhythm Innovations is a leading product development company specializing in Risk and Compliance Management solutions. We are committed to delivering innovative and user centric technologies that empower businesses to navigate regulatory landscapes with confidence and efficiency. Our culture fosters creativity, collaboration, and excellence, making us a hub for talented individuals passionate about shaping the future of risk and compliance management. Rhythm Innovations is looking for an experienced and visionary Salesforce AI Architect to drive the design, development, and deployment of cutting-edge AI solutions that power our supply chain risk management and other innovative platforms. This role will play a pivotal part in enabling our customers to harness the power of AI for actionable insights, operational efficiency, and strategic decision-making. Requirements Role & Responsibilities: Salesforce AI Model Building: Design, configure, and optimize Salesforce Einstein models (Predictive models, Generative models). Fine-tune Salesforce native AI models based on CRM data, including Sales, Service, and Marketing Cloud data. Work with Salesforce Data Cloud to prepare high-quality datasets for AI use. Prompt Engineering: Develop, test, and refine effective prompts for Salesforce Einstein GPT and custom GPT models (Flows, Apex code generation, knowledge article generation). Build a prompt library for reusable AI-assisted solutions across sales, service, marketing, and dev teams. Einstein for Developers & AI Tool Utilization: Use Einstein for Developers (pilot or GA) for Apex, Flow, and LWC code generation. 
Implement AI-driven solutions for automating repetitive development tasks (e.g., Flow GPT, Apex GPT). Integrate Salesforce AI tools with DevOps pipelines where possible. Innovation Initiatives: Collaborate with required stakeholders to design Proofs of Concept (PoCs) using AI (new apps, CRM features, automation bots). Evaluate new Salesforce AI releases and recommend pilots/adoption (Einstein Copilot, Prompt Studio, Model Builder). Educate internal teams on prompt best practices and AI ethics. Required Skills Strong experience with Salesforce platform (Sales Cloud, Service Cloud, Experience Cloud, Apex, Flows, LWC). Hands-on with Einstein GPT, Prompt Studio, and/or Einstein Prediction Builder. Good understanding of AI/ML concepts (fine-tuning, large language models, bias, training data). Familiarity with Salesforce metadata structure, API integration, and Data Cloud (CDP). Experience using tools like ChatGPT, Copilot, FlowGPT, or Salesforce CodeGen is a plus. Excellent written and verbal communication for prompt writing and documentation. Preferred Qualifications Salesforce Certifications: Salesforce Certified Administrator Salesforce Certified Platform Developer I Salesforce Certified AI Associate Salesforce Einstein Analytics and Discovery Consultant (preferred) Background in AI/ML engineering, NLP, data science, or related fields (bonus).
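The "prompt library" responsibility described in this posting can be sketched as a small registry of reusable templates with required-variable checking. The class, template name, and fields below are hypothetical illustrations, not Salesforce or Einstein GPT APIs.

```python
import string

class PromptLibrary:
    """Reusable named prompt templates with required-variable checking."""

    def __init__(self):
        self._templates = {}

    def register(self, name, template):
        self._templates[name] = template

    def render(self, name, **values):
        template = self._templates[name]
        # Extract the {placeholders} the template declares.
        required = {f for _, f, _, _ in string.Formatter().parse(template) if f}
        missing = required - values.keys()
        if missing:
            raise ValueError(f"missing variables: {sorted(missing)}")
        return template.format(**values)

lib = PromptLibrary()
lib.register(
    "case_summary",
    "Summarize case {case_id} for an agent. Tone: {tone}. Max {max_words} words.",
)
prompt = lib.render("case_summary", case_id="00123", tone="empathetic", max_words=80)
```

Centralizing templates this way is one plausible approach to making prompts reviewable and reusable across sales, service, and dev teams, as the posting asks.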
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Senior SOC Analyst / Administrator Location: [Insert Location] – Willingness to work in a 24x7 rotational shift environment Industry: Information Technology | BFSI | Fintech Experience Required: 4 to 8 years in Security Operations Center (SOC) with strong SOC analysis and administration skills Job Summary: We are looking for an experienced and dedicated SOC Analyst/Admin to join our 24x7 Cyber Security Operations team. The ideal candidate will have deep expertise in SIEM platforms (preferably ArcSight and IBM QRadar), strong analytical capabilities in threat detection and incident response, and a solid background in cyber defense operations. Key Responsibilities: Administer, maintain, and troubleshoot SIEM solutions (ArcSight, IBM QRadar). Perform real-time security monitoring and incident response across enterprise-wide environments. Analyze and investigate security alerts from tools including IDPS, SIEM, antivirus, EDR, UBA, and proxy systems. Build and enhance detection use cases, perform false-positive tuning, and implement threat-hunting initiatives. Actively support and manage Data Loss Prevention (DLP), Threat Intelligence, and Vulnerability Management activities. Participate in and enhance the full incident response lifecycle: detection, triage, containment, eradication, and recovery. Draft high-quality incident reports for high-severity events and contribute to root cause analysis. Develop and maintain SOPs, IR runbooks, and SOAR playbooks. Collaborate with internal teams and third-party vendors to resolve complex issues. Ensure high availability and performance of SOC infrastructure. Respond to Service Requests (SRs), Change Requests (CRs), and daily operations queries. Lead or support projects related to security tooling, automation, and process improvements. Key Skills & Qualifications: 4–8 years of experience in a SOC environment with a blend of analysis and SIEM administration. 
Strong experience with SIEM tools such as ArcSight and IBM QRadar (configuration, tuning, maintenance). Deep understanding of cybersecurity concepts including threat detection, malware analysis, network security, and endpoint security. Familiarity with threat intelligence platforms, DLP systems, and vulnerability scanning tools. Strong understanding of TCP/IP, common protocols, and the MITRE ATT&CK framework. Excellent troubleshooting and analytical thinking abilities. Strong documentation and communication skills. Preferred Certifications (Added Advantage): CEH (Certified Ethical Hacker) CTIA (Certified Threat Intelligence Analyst) CISM (Certified Information Security Manager) CCNA (Cisco Certified Network Associate) CND (Certified Network Defender) Work Environment: 24x7 shift-based work; must be open to working in night and weekend shifts as part of a rotating schedule. Fast-paced, highly collaborative security operations environment. Why Join Us? Work with cutting-edge cybersecurity technologies Engage in real-time threat defense and mitigation Opportunity to grow within a dynamic SOC team with continuous learning
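The false-positive tuning and triage work described above can be sketched as a severity-floor-plus-allowlist filter over incoming alerts. The field names, severity scale, and sample alerts are illustrative only; they are not tied to ArcSight or QRadar data models.

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def triage(alerts, min_severity="medium", allowlist=frozenset()):
    """Drop alerts from allowlisted sources or below the severity floor,
    then sort the rest so the most severe are handled first."""
    floor = SEVERITY[min_severity]
    kept = [a for a in alerts
            if a["source_ip"] not in allowlist
            and SEVERITY[a["severity"]] >= floor]
    return sorted(kept, key=lambda a: SEVERITY[a["severity"]], reverse=True)

alerts = [
    {"source_ip": "10.0.0.5", "severity": "high", "rule": "Possible C2 beacon"},
    {"source_ip": "10.0.0.9", "severity": "low", "rule": "Port scan"},
    {"source_ip": "10.0.0.7", "severity": "critical", "rule": "Credential dumping"},
]
queue = triage(alerts, allowlist=frozenset({"10.0.0.9"}))
```

In a real SOC this logic would live in SIEM rule conditions or a SOAR playbook rather than a standalone script; the sketch only shows the filtering idea.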
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Apna: Apna is India's largest jobs and professional networking platform for frontline workers. We're building the infrastructure to power hiring, skill-building, and career growth for 300 million+ working Indians. As we expand our AI-first platform across voice, text, and multimodal workflows — we're looking for a bold and curious AI Data Scientist who wants to shape the future of applied Gen AI. Requirement: 1 Location: Bengaluru (Work from Office - Domlur) Team: AI & Machine Learning Experience: 3-5 years Requirements What You'll do: Fine-tune and deploy LLMs, TTS, STT, and voice models for use in real-time conversations with millions of users Convert unstructured, messy real-world audio/text data into clean, high-quality datasets for training and evaluation Build inference pipelines optimized for low-latency, high-accuracy voice agents and multimodal interfaces Work closely with infra and product teams to ship production-grade GenAI models with observability, fallback, and monitoring Experiment with GANs, diffusion models, audio generation, and multimodal fusion to power next-gen AI agents Own the full model lifecycle — from research and training to deployment, testing, and iteration. What we're Looking for: 3-5 years of hands-on experience in AI / ML roles, ideally in startups or product-driven teams. Strong grasp of LLM fine-tuning, instruction tuning, or pretraining techniques Familiarity with TTS/STT systems, Whisper, Tacotron, VITS, or commercial tools like ElevenLabs Experience with multimodal architectures, generative audio, GANs, or diffusion-based models Ability to work with real-world messy data, design training pipelines, and debug model failure modes Fluency in frameworks like PyTorch, HuggingFace, TensorFlow, and ecosystem tools (ONNX, Triton, LangChain, etc.) 
Passion for building high-impact AI features that ship to real customers Benefits Why Join Us: Work at the cutting edge of LLMs, voice AI, and generative models — and ship real products, not just prototypes Directly impact millions of users by powering AI agents that help with hiring, learning, and career growth Collaborate with a world-class team of AI engineers, researchers, and product minds who move fast and ship boldly Freedom to explore: Own experiments, propose architecture, or contribute to foundational model training Startup speed, enterprise scale — best of both worlds. Rapid iteration and direct customer feedback Multilingual India - first problems that push the boundaries of speech, reasoning, and personalization
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. About The Team At SAP Procurement Product Team, our mission is to provide the world’s leading cloud-based spend management solution, unifying the SAP Procurement portfolio across SAP S/4HANA, SAP Ariba Applications & Business Network, and SAP Fieldglass. To strengthen our team further, we are looking for a skilled Developer to join our 'Procurement for Public Sector' product engineering team in Bangalore. SAP Procurement for Public Sector is a private cloud product catering to the holistic procurement needs of large public sector agencies and government functions, globally. Role Work as a full-stack developer for development of state-of-the-art software applications in S/4HANA Private Cloud.
Demonstrate responsibility for all tasks and ensure completion with good quality, in-time delivery and efficiency Apply clean code principles: execute code reviews, code inspections, unit testing, performance measurements and other quality tasks Perform development tasks in a self-reliant way Work closely with Architect, Senior Developers and other stakeholders to achieve effective design and code reviews Author and execute effective automation tests Author software design and technical documentation Role Requirements 4-8 years of experience in software development & strong educational qualifications (Bachelor’s degree in Engineering or MCA from reputed institutes) In-depth programming background and excellent technical skills in ABAP OO, ABAP Core Data Services (CDS)/ OData / RAP / HANA Knowledge of automation test frameworks like Vyper/OPA5/QUnits is desirable Strong knowledge in SAPUI5/Fiori Exposure to agile development methodologies like Scrum Experience or functional knowledge in Procurement / SAP MM / SRM is a plus Knowledge/Experience in performance tuning in HANA CDS, Analytical application development using KPI, ALP is a plus Ability to work effectively in a fast-paced and changing business environment Developer (T2) Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development.
Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. 
Requisition ID: 430844 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
Posted 4 days ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We are looking for a proactive and technically strong Tableau Administrator with 6 – 8 years of relevant experience to manage, optimize, and support our Tableau Server environment. This role is essential to ensure reliable BI operations, secure user management, performance optimization, and smooth integration with enterprise data sources. Key Responsibilities Install, configure, and maintain Tableau Server for optimal availability and performance Monitor server health and proactively resolve performance issues Manage user roles, groups, projects, and access permissions Perform version upgrades, patches, and platform maintenance Support dashboard deployment and collaboration with developers and analysts Integrate Tableau with various data sources and authentication platforms Automate admin tasks using scripting languages like PowerShell or Python Maintain clear documentation of configurations, policies, and best practices Stay current with Tableau updates and recommend feature adoption Required Skills 3 years of hands-on experience in Tableau Server Administration In-depth knowledge of Tableau architecture, performance tuning, and security Experience with SQL and data source configuration Proficiency in scripting (PowerShell, Python, or Bash) Familiarity with content migration, backup, and restore processes Strong problem-solving and communication skills Ability to work in a client-facing, cross-functional environment Nice to Have Tableau Server or Desktop Certification Experience with cloud Tableau hosting (AWS/Azure) Knowledge of DevOps/CI-CD tools and deployment automation Experience supporting Tableau in large-scale enterprise environments Why Join Us? Be part of a collaborative BI team working on impactful enterprise analytics solutions. We offer a supportive environment, growth-focused learning culture, and the chance to work on innovative data-driven projects that make a real difference. 
Job Details Employment: Full-time Location: Ahmedabad Experience Required: 6–8 Years Industry: IT Services and Consulting Job Role: Tableau Administrator
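The "automate admin tasks using scripting languages like PowerShell or Python" responsibility above might look like this minimal sketch, which flags extracts whose last successful refresh is older than an allowed freshness window. The extract names, timestamps, and 24-hour window are invented; a production script would pull this data from Tableau Server's repository or REST API rather than a hard-coded dict.

```python
from datetime import datetime, timedelta

def stale_extracts(extracts, now, max_age_hours=24):
    """Return names of extracts whose last successful refresh is older
    than the allowed window - candidates for an admin alert."""
    cutoff = now - timedelta(hours=max_age_hours)
    return sorted(name for name, last_refresh in extracts.items()
                  if last_refresh < cutoff)

now = datetime(2024, 5, 1, 12, 0)
extracts = {
    "sales_daily": datetime(2024, 4, 30, 23, 30),  # 12.5 h old -> fresh
    "inventory":   datetime(2024, 4, 29, 8, 0),    # 52 h old   -> stale
}
overdue = stale_extracts(extracts, now)
```

Wiring the result into email or Slack alerting would turn this into the proactive monitoring the role calls for.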
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Department: Information Technology Location: APAC-India-IT Delivery Center Hyderabad Description Essential Duties and Responsibilities: Develop and maintain data pipelines using Azure native services like ADLS Gen 2, Azure Data Factory, Synapse, Spark, Python, Databricks and AWS Cloud services, Databurst Develop datasets required for Business Analytics in Power BI and Azure Data Warehouse. Ensure software development principles, standards, and best practices are followed Maintain existing applications and provide operational support. Review and analyze user requirements and write system specifications Ensure quality design, delivery, and adherence to corporate standards. Participate in daily stand-ups, reviews, design sessions and architectural discussions. Other duties may be assigned What We're Looking For Required Qualifications and Skills: 5+ years of experience in solution delivery for Data Analytics to provide insights for various departments in the organization. 5+ years of experience delivering solutions using the Microsoft Azure Platform or AWS Services with an emphasis on data solutions and services.
Extensive knowledge of writing SQL queries and experience in performance-tuning queries Experience developing software architectures and key software components Proficient in one or more of the following programming languages: C#, Java, Python, Scala, and related open-source frameworks. Understanding of data services including Azure SQL Database, Data Lake, Databricks, Data Factory, Synapse Data modeling experience on Azure DW/AWS, understanding of dimensional models, star schemas, data vaults Quick learner who is passionate about new technologies. Strong sense of ownership, customer obsession, and drive with a can-do attitude. Team player with great communication skills (listening, speaking, reading, and writing) in English BS in Computer Science, Computer Engineering, or other quantitative fields such as Statistics, Mathematics, Physics, or Engineering. Applicant Privacy Policy Review our Applicant Privacy Policy for additional information. Equal Opportunity Statement Align Technology is an equal opportunity employer. We are committed to providing equal employment opportunities in all our practices, without regard to race, color, religion, sex, national origin, ancestry, marital status, protected veteran status, age, disability, sexual orientation, gender identity or expression, or any other legally protected category. Applicants must be legally authorized to work in the country for which they are applying, and employment eligibility will be verified as a condition of hire.
Posted 4 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚀 Senior Tableau Administrator
Location: Ahmedabad / Hyderabad | Experience: 6+ Years | Job Type: Full-Time | Looking for Early Joiners Only

Role Overview
As a Senior Tableau Administrator, you will lead the administration, governance, and optimization of our enterprise Tableau environment. You will play a key role in managing high-availability Tableau Server deployments, collaborating with cross-functional teams, and ensuring secure, seamless access to dashboards and data across the organization.

Key Responsibilities
- Tableau Server Management: Install, configure, and manage multi-node Tableau Server clusters; monitor health and performance metrics.
- Security & Governance: Implement RBAC, user authentication, and content security policies aligned with organizational standards.
- Performance Optimization: Troubleshoot server, extract-refresh, and dashboard performance issues; fine-tune configurations for reliability and speed.
- Integration & Automation: Integrate Tableau with data sources such as Snowflake, SQL Server, and AD; automate admin tasks using the REST API, PowerShell, or Python.
- Platform Upgrades: Lead Tableau Server upgrades, patch management, and migration initiatives.
- User Support & Enablement: Provide L2/L3 support, lead onboarding sessions, and drive user adoption across business units.
- Documentation: Maintain architecture diagrams, SOPs, and platform usage guidelines.
- Innovation: Stay current with Tableau's roadmap and new features, and recommend enhancements for platform scalability and usage.

Required Skills
- 6+ years of experience administering Tableau Server in enterprise environments.
- Strong understanding of Tableau architecture, including clustering, load balancing, and external authentication.
- Proficiency in scripting (PowerShell, Python) and the Tableau REST API for automation.
- Solid SQL knowledge and an understanding of performance tuning for Tableau extracts/live connections.
- Familiarity with cloud platforms (AWS or Azure) and experience in DevOps for BI is a plus.
- Experience supporting data analytics teams, enabling self-service BI, and implementing usage governance.
- Tableau Server or Desktop certification preferred.
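REST-API automation of the kind described above starts with a sign-in call; the offline sketch below only builds the request, assuming personal-access-token authentication (server URL, API version, and token values are placeholders to verify against Tableau's REST API docs):

```python
import json

# Build (but do not send) a Tableau REST API sign-in request.
# All names and secrets here are placeholders for illustration.
def build_signin_request(server, api_version, token_name, token_secret, site=""):
    """Return the (url, body) pair for a personal-access-token sign-in."""
    url = f"{server}/api/{api_version}/auth/signin"
    body = {
        "credentials": {
            "personalAccessTokenName": token_name,
            "personalAccessTokenSecret": token_secret,
            "site": {"contentUrl": site},
        }
    }
    return url, json.dumps(body)

url, body = build_signin_request(
    "https://tableau.example.com", "3.22", "admin-automation", "SECRET")
print(url)  # https://tableau.example.com/api/3.22/auth/signin
```

The token returned by a real sign-in would then be passed in the `X-Tableau-Auth` header of subsequent admin calls.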
Posted 4 days ago
8.0 years
30 - 60 Lacs
India
Remote
Job Title: Senior Embedded Software Engineer – Routing & Networking Protocols
Location: Remote | Job Type: Full-time | Experience: 8+ years

Job Summary
We are seeking a highly skilled Senior Embedded Software Engineer with deep expertise in routing protocols, data-plane forwarding, and networking-stack integration to design and develop high-performance, enterprise-grade networking appliances. The ideal candidate will have extensive experience in embedded systems, open-source routing stacks (FRR, BIRD), and cloud-integrated networking solutions. You will work on cutting-edge networking technologies, optimizing BGP, OSPF, MPLS, VXLAN, and SDN solutions while collaborating with cross-functional teams to deliver scalable, secure, and high-performance systems.

Key Responsibilities

Routing Protocol Development & Optimization
- Design, implement, and optimize routing protocols (BGP, OSPF, RIP, EIGRP, IS-IS) in embedded systems.
- Integrate and enhance the FRR (Free Range Routing) stack with custom data-plane acceleration.
- Work on BIRD or other open-source routing stacks for performance tuning and feature enhancements.
- Develop fast-path forwarding mechanisms to improve packet-processing efficiency.

Data-Plane & Forwarding Technologies
- Implement and optimize L2/L3 forwarding, VXLAN, MPLS, Segment Routing, and tunneling protocols (GRE, VPN, MPLS VPNv4/v6).
- Enhance VRF-based transport networks for multi-tenancy and segmentation.
- Work on SDN (Software-Defined Networking) solutions for scalable distributed systems.

Embedded Systems & Networking Stack Development
- Develop high-performance embedded software in C, C++, and Python for networking appliances.
- Debug and optimize kernel networking stacks and TCP/IP, UDP, ARP, DHCP, DNS, NAT, and firewall functionality.
- Ensure low-latency packet processing with hardware offload (DPDK, SmartNICs, or ASICs).

Cloud & DevOps Integration
- Collaborate with cloud teams to deploy networking solutions on AWS, Azure, GCP, or OCI.
- Implement microservices, distributed computing, and security-first architectures for hybrid cloud deployments.
- Automate deployments using CI/CD pipelines, Infrastructure-as-Code (IaC), and DevOps practices.

Cross-Functional Leadership
- Lead feature development independently with minimal supervision.
- Mentor junior engineers and conduct design reviews, code reviews, and performance benchmarking.
- Communicate technical proposals to senior management and stakeholders.

Technical Stack & Skills
- Programming Languages: C, C++ (17/20), Python
- Routing Protocols: BGP, OSPF, RIP, EIGRP, IS-IS, MPLS, Segment Routing
- Open-Source Routing Stacks: FRR (Free Range Routing), BIRD, Quagga/Zebra
- Data-Plane Technologies: L2/L3 switching, VXLAN, MPLS, VRF, GRE, VPN (IPsec, SSL)
- Networking Protocols: TCP/IP, UDP, ARP, DHCP, DNS, NAT, Firewall
- Embedded Systems: Linux kernel networking, DPDK, SmartNICs, ASICs
- Cloud & DevOps: AWS/Azure/GCP, Kubernetes, Docker, CI/CD (Jenkins/GitLab), IaC (Terraform)
- SDN & Virtualization: Open vSwitch, OpenFlow, NFV, distributed systems
- Certifications (Plus): CCNA/CCNP/CCIE, AWS/Azure Networking Specialty

Qualifications & Experience
- Bachelor's/Master's in Computer Science, Electrical Engineering, or a related field.
- 8+ years in embedded software development for networking appliances or enterprise-grade systems.
- 3+ years of independent feature ownership in routing/data-plane technologies.
- Hands-on experience with FRR, BIRD, or proprietary routing stacks.
- Strong debugging skills with Wireshark, tcpdump, gdb, and Valgrind.
- Experience with SDN, microservices, and cloud architectures is a plus.

Nice-to-Have Skills
- Cloud networking (AWS Transit Gateway, Azure ExpressRoute, GCP Hybrid Connect).
- Design thinking, security-first development, full-stack awareness.
- Contributions to open-source networking projects (FRR, BIRD, Linux kernel).

Soft Skills
- Strong collaboration in startup-like agile environments.
- Excellent communication (written & verbal) for technical and executive audiences.
- Problem-solving mindset with a focus on scalability and performance.

Why Join Us?
- Work on next-gen networking appliances with real-world impact.
- Opportunity to optimize open-source routing stacks at scale.
- Competitive salary, equity, and career growth in cutting-edge tech.

Skills: nat, c, mpls, vxlan, rip, aws, azure, c++, linux kernel networking, kubernetes, tcp/ip, dhcp, dpdk, vrf, embedded software, udp, smartnics, ci/cd, gre, openflow, networking, l2/l3 switching, terraform, ospf, firewall, docker, data, open vswitch, asics, nfv, software, segment routing, bird, python, frr, gcp, arp, eigrp, bgp, is-is, routing, vpn, dns, embedded
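The L3 forwarding work described above ultimately hinges on longest-prefix match; a toy sketch of the selection rule follows (routes and interface names are invented, and real data planes use tries or TCAM rather than a linear scan):

```python
import ipaddress

# Illustrative longest-prefix-match lookup, the core rule of L3 forwarding.
# A destination matching several prefixes takes the most specific one.
ROUTES = {
    "10.0.0.0/8":  "ge-0/0/1",
    "10.1.0.0/16": "ge-0/0/2",
    "0.0.0.0/0":   "ge-0/0/0",   # default route
}

def lookup(dst_ip):
    """Return the next-hop interface with the longest matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, iface in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best[1]

print(lookup("10.1.2.3"))   # the /16 beats the /8: ge-0/0/2
print(lookup("192.0.2.1"))  # no specific match, default route: ge-0/0/0
```

FRR and BIRD implement exactly this semantics, just with RIB/FIB data structures built for millions of prefixes.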
Posted 4 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Internal Firm Services
Industry/Sector: Not Applicable
Specialism: IFS - Internal Firm Services - Other
Management Level: Manager

Job Description & Summary
At PwC, our people in information technology operations focus on managing and maintaining the technology infrastructure and systems to provide smooth operations and efficient delivery of IT services. This includes monitoring network performance, troubleshooting issues, and implementing security measures. In service management at PwC, you will focus on overseeing and confirming the delivery of quality and timely services. You will monitor vendor compliance with contractual agreements for service quality, availability, and reliability, manage the business and delivery of services, and lead service recovery in case of major incidents.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth.
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career as an L3 Database Administrator involves ensuring that databases are robust, secure, and efficient to support data-driven operations. The role entails continuous improvement of systems and contributing to the overall IT strategy while collaborating with other teams to meet application and infrastructure needs.

Responsibilities:
- Administer databases using SQL Server and AWS RDBMS.
- Monitor database performance and implement tuning strategies.
- Automate database operations using AWS tools and scripting.
- Ensure data security, compliance, and access control.
- Collaborate with development and DevOps teams to support application data needs.
- Participate in gathering and analyzing user requirements for application design.
- Support identification and resolution of bugs and issues.
- Work with other IT teams to support application and infrastructure needs.

Mandatory skill sets:
- Experience with SQL Server installation, configuration, and design.
- Proficiency in database management systems (e.g., SQL Server, MySQL, AWS RDBMS, PostgreSQL).
- Strong knowledge of SQL and database programming.
- Ability to diagnose and solve complex database problems.
- Experience with large-scale databases and high-availability environments.

Preferred skill sets:
- ITIL V4 certification.
- Microsoft Certified Database Administrator (MCDBA) certification.
- Experience in implementing security measures and maintaining database integrity.

Years of experience required: 8 years and above.
Education qualification: Any UG/PG
Degrees/Field of Study required: Bachelor Degree, Master Degree
Degrees/Field of Study preferred: (not specified)
Required Skills: MySQL
Optional Skills: DevOps
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
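The tuning responsibilities above often come down to index coverage; a small sketch using SQLite from Python's stdlib as a stand-in for SQL Server/PostgreSQL (the schema is invented, and the planner's exact wording varies by SQLite version):

```python
import sqlite3

# Demonstrate index-driven query tuning: the same filter goes from a
# full table scan to an index seek once a covering index exists.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

QUERY = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"

# Without an index the planner scans the whole table...
plan_before = cur.execute(QUERY).fetchall()[0][-1]
print(plan_before)  # e.g. "SCAN orders"

# ...after adding an index, the same query becomes an index search.
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan_after = cur.execute(QUERY).fetchall()[0][-1]
print(plan_after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same before/after comparison (via `EXPLAIN` or execution plans) is the everyday workflow on SQL Server and PostgreSQL.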
Posted 5 days ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
This individual will play a crucial, client-facing role in Application Performance Monitoring (APM), User Experience Monitoring (UEM), and Site Reliability Engineering (SRE) solutions, translating client requirements into scalable and effective implementations. A valid Dynatrace certification is mandatory.

Responsibilities:
- Take complete charge of the Dynatrace architecture; provide recommendations and design right-sized infrastructure for Dynatrace in the respective regions across the globe.
- Manage a sound strategy around optimal license consumption on an enterprise licensing model.
- Create sensible standards around RUM.
- Performance tuning, updating baseline monitoring, and assisting in filling any gaps in the existing monitoring environment.
- Provide hands-on administrator support for Dynatrace, with hands-on coding skills (Java/Python/shell scripting).
- Create API-based self-services for mass updates to metadata, alerting profiles, notification systems, SLOs, and other anomaly-detection rules.
- Create standards around cloud monitoring via Dynatrace and cater to needs in AWS/Azure and other public clouds.
- Create standards around OS, application, DB, network, storage, and Kubernetes monitoring via Dynatrace and cater to the needs of full-stack observability.
- Dashboarding, management zones, tagging, alerting profiles, and integrations for new application onboardings.
- Configure settings for monitoring, services, log analytics, anomaly detection, integration with third-party services, and general preferences.
- Analyze APM metrics to identify bottlenecks, latency issues, and slow database queries.
- Use visualizations and data provided by Dynatrace to deliver application and infrastructure monitoring information to key stakeholders and other technical resources.
- Use Dynatrace's artificial intelligence to identify problems and root causes.
- Collaborate with staff in diagnosing and resolving issues, engaging other technical resources as needed to troubleshoot and resolve them.
- Create and maintain technical documentation and operating procedures.
- Present performance reports and recommendations to leadership teams.
- Conduct training and knowledge transfer for staff.
- Ability to work within an offshore/onshore team structure.
- Knowledgeable about SRE tools, technologies, and best practices.

Requirements:
- 4-10 years of experience with Dynatrace APM and UEM products.
- Experience in production support and scalable architecture implementations.
- Structured approach, analytical thinking, and accuracy.
- Architecture, deployment, configuration, and maintenance of OneAgent, ActiveGate, Real User Monitoring (RUM), and agentless RUM.
- Container, cloud, and virtual-machine monitoring via Dynatrace.
- Experience with CI/CD tools such as Jenkins or Bamboo.
- Good to have: Dynatrace Associate certification.
- Good to have: experience with version control tools (GitHub/Bitbucket).
- Implement auto-remediation scripts using Dynatrace APIs, ServiceNow, Ansible, or Terraform.
- Good to have: knowledge of AI tools and technologies for creating Dynatrace solutions.
- Should currently be working as an SRE engineer; a valid Dynatrace certification is mandatory.

Hybrid mode - 3 days from office - Noida
Please share CVs at ankit.kumar@celsiortech.com
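An API-based self-service like the mass-tagging task above can be sketched offline. The tenant URL, token, and tags below are placeholders, and the endpoint shape follows my understanding of the Dynatrace Environment API v2 custom-tags call; treat the exact path and payload as assumptions to verify against the Dynatrace docs:

```python
from urllib.parse import urlencode

# Build (but do not send) a bulk-tagging request for all prod hosts.
def build_bulk_tag_request(tenant, token, entity_selector, tags):
    """Return (url, headers, body) for a POST to the custom-tags endpoint."""
    query = urlencode({"entitySelector": entity_selector})
    url = f"{tenant}/api/v2/tags?{query}"
    headers = {"Authorization": f"Api-Token {token}",
               "Content-Type": "application/json"}
    body = {"tags": [{"key": k, "value": v} for k, v in tags.items()]}
    return url, headers, body

url, headers, body = build_bulk_tag_request(
    "https://abc12345.live.dynatrace.com", "dt0c01.EXAMPLE",
    'type("HOST"),tag("env:prod")', {"team": "sre", "costCenter": "cc-42"})
print(url)
```

Wrapping calls like this in small scripts is what turns one-off console edits into repeatable mass updates.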
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Senior DevOps Engineer
Experience: 4-7 years | Salary: Competitive | Preferred Notice Period: Within 30 Days
Opportunity Type: Onsite (Ahmedabad) | Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients)

Must-have skills: Azure OR Docker, TensorFlow, Python OR shell scripting

Attri (one of Uplers' clients) is looking for a Senior DevOps Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

What You'll Do (Responsibilities):
- Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure on Azure.
- Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
- Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
- Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
- Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, or AWS CodeBuild & CodePipeline.
- Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
- Collaborate with ML engineers and data engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch.
- Implement and optimize ETL/data-streaming pipelines using Kafka, EventBridge, and Event Hubs.
- Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
- Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
- Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
- Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
- Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
- Contribute to backend development in Python (web frameworks), REST/socket and gRPC design, and testing (unit/integration).
- Participate in incident response, performance tuning, and continuous system improvement.

Good to Have:
- Hands-on experience with ML lifecycle tools like MLflow and Kubeflow.
- Previous involvement in production-grade AI/ML projects or data-intensive systems.
- Startup or high-growth tech company experience.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of hands-on experience in a DevOps, SRE, or cloud infrastructure role.
- Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
- Strong communication and collaboration skills to work across engineering, data science, and product teams.

How to apply for this opportunity — easy 3-step process:
1. Click on Apply! and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
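The cost-optimization bullet above (tagging and right-sizing) can be sketched offline; the billing rows and the CPU threshold below are invented stand-ins for a real cloud billing export:

```python
from collections import defaultdict

# Tag-based cost reporting plus a simple right-sizing heuristic.
USAGE = [
    {"team": "ml",  "instance": "gpu-a", "monthly_cost": 2100.0, "avg_cpu": 0.12},
    {"team": "ml",  "instance": "gpu-b", "monthly_cost": 2100.0, "avg_cpu": 0.71},
    {"team": "web", "instance": "api-1", "monthly_cost": 310.0,  "avg_cpu": 0.55},
    {"team": "web", "instance": "api-2", "monthly_cost": 310.0,  "avg_cpu": 0.08},
]

def cost_by_team(usage):
    """Roll monthly cost up by the 'team' tag."""
    totals = defaultdict(float)
    for row in usage:
        totals[row["team"]] += row["monthly_cost"]
    return dict(totals)

def rightsizing_candidates(usage, cpu_threshold=0.15):
    """Flag instances whose average CPU sits below the threshold."""
    return sorted(r["instance"] for r in usage if r["avg_cpu"] < cpu_threshold)

print(cost_by_team(USAGE))             # {'ml': 4200.0, 'web': 620.0}
print(rightsizing_candidates(USAGE))   # ['api-2', 'gpu-a']
```

In practice the same aggregation runs over AWS Cost and Usage Reports or Azure cost exports, with tags enforced by policy.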
Posted 5 days ago
0.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Risk
Management Level: Associate

Job Description & Summary
A career within Internal Audit services will provide you with an opportunity to gain an understanding of an organisation's objectives, regulatory and risk management environment, and the diverse needs of their critical stakeholders. We focus on helping organisations look deeper and see further, considering areas like culture and behaviours, to help improve and embed controls. In short, we seek to address the right risks and ultimately add value to their organisation.

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within….

Responsibilities:

Architecture Design:
- Design and implement scalable, secure, and high-performance architectures for Generative AI applications.
- Integrate Generative AI models into existing platforms, ensuring compatibility and performance optimization.

Model Development and Deployment:
- Fine-tune pre-trained generative models for domain-specific use cases.
- Define the data collection, sanitization, and data preparation strategy for model fine-tuning.
- Well versed in machine learning algorithms: supervised, unsupervised, and reinforcement learning, as well as deep learning.
- Well versed in ML models such as linear regression, decision trees, gradient boosting, random forest, and k-means.
- Evaluate, select, and deploy appropriate Generative AI frameworks (e.g., PyTorch, TensorFlow, CrewAI, AutoGen, LangGraph, agentic frameworks).

Innovation and Strategy:
- Stay up to date with the latest advancements in Generative AI and recommend innovative applications to solve complex business problems.
- Define and execute the AI strategy roadmap, identifying key opportunities for AI transformation.
- Good exposure to agentic design patterns.

Collaboration and Leadership:
- Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders.
- Mentor and guide team members on AI/ML best practices and architectural decisions.
- Able to lead a team of data scientists, GenAI engineers, and software developers.

Performance Optimization:
- Monitor the performance of deployed AI models and systems, ensuring robustness and accuracy.
- Optimize computational costs and infrastructure utilization for large-scale deployments.

Ethical and Responsible AI:
- Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks.
- Implement safeguards to mitigate bias, misuse, and unintended consequences of Generative AI.

Mandatory skill sets:
- Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark.
- Experience with machine learning and artificial intelligence frameworks, models, and libraries (TensorFlow, PyTorch, Scikit-learn, etc.).
- Strong knowledge of foundational LLMs (OpenAI GPT-4o, o1, Claude, Gemini, etc.) as well as open-source models such as Llama 3.2 and Phi.
- Proven track record with event-driven architectures and real-time data processing systems.
- Familiarity with Azure DevOps and other LLMOps tools for operationalizing AI workflows.
- Deep experience with Azure OpenAI Service and vector DBs, including API integrations, prompt engineering, and model fine-tuning, or equivalent technology in AWS/GCP.
- Knowledge of containerization technologies such as Kubernetes and Docker.
- Comprehensive understanding of data lakes and strategies for data management.
- Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel.
- Proficiency in cloud computing platforms such as Azure or AWS.
- Exceptional leadership, problem-solving, and analytical abilities.
- Superior communication and collaboration skills, with experience managing high-performing teams.
- Ability to operate effectively in a dynamic, fast-paced environment.

Preferred skill sets:
- Experience with additional technologies such as Datadog and Splunk.
- Programming languages like C#, R, and Scala.
- Relevant solution-architecture certificates and continuous professional development in data engineering and Gen AI.

Years of experience required: 0-1 years
Education qualification: BE / B.Tech / MCA / M.Sc / M.E / M.Tech
Degrees/Field of Study required: Bachelor in Business Administration, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred: (not specified)
Required Skills: Java
Optional Skills: Accepting Feedback, Accounting and Financial Reporting Standards, Active Listening, Artificial Intelligence (AI) Platform, Auditing, Auditing Methodologies, Business Process Improvement, Communication, Compliance Auditing, Corporate Governance, Data Analysis and Interpretation, Data Ingestion, Data Modeling, Data Quality, Data Security, Data Transformation, Data Visualization, Emotional Regulation, Empathy, Financial Accounting, Financial Audit, Financial Reporting, Financial Statement Analysis, Generally Accepted Accounting Principles (GAAP) {+ 19 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
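The LangChain/LlamaIndex and vector-DB skills listed above revolve around folding retrieved context into a prompt; a framework-free sketch with invented documents shows the retrieval-augmented-generation pattern:

```python
# Toy retrieval-augmented prompt assembly. The documents, their sources,
# and the instruction wording are all invented for illustration.
DOCS = [
    ("policy.pdf", "Audit evidence must be retained for seven years."),
    ("faq.md",     "Reports are reviewed quarterly by the risk committee."),
]

def build_prompt(question, retrieved):
    """Fold retrieved snippets into a grounded prompt for an LLM."""
    context = "\n".join(f"[{src}] {text}" for src, text in retrieved)
    return (
        "Answer using only the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt("How long is audit evidence retained?", DOCS)
print(prompt)
```

Frameworks like LangChain add the retriever, memory, and model call around this core, but the grounded-prompt shape is the same.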
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
- Prompt Engineering: Crafting effective prompts for models like GPT, DALL·E, and Codex. Understanding prompt tuning, chaining, and context management.
- API Integration: Using APIs from OpenAI, Hugging Face, Cohere, etc. Handling authentication, rate limits, and response parsing.
- Model Fine-Tuning & Customization: Fine-tuning open-source models (e.g., LLaMA, Mistral, Falcon). Using tools like LoRA, PEFT, and Hugging Face Transformers.

Responsibilities
- Prompt Engineering: Crafting effective prompts for models like GPT, DALL·E, and Codex; understanding prompt tuning, chaining, and context management.
- API Integration: Using APIs from OpenAI, Hugging Face, Cohere, etc.; handling authentication, rate limits, and response parsing.
- Model Fine-Tuning & Customization: Fine-tuning open-source models (e.g., LLaMA, Mistral, Falcon) using tools like LoRA, PEFT, and Hugging Face Transformers.
- Data Engineering for AI: Collecting, cleaning, and preparing datasets for training or inference; understanding tokenization and embeddings.
- LangChain / LlamaIndex: Building AI-powered apps with memory, tools, and retrieval-augmented generation (RAG); connecting LLMs to external data sources like PDFs, databases, or APIs.
- Vector Databases: Using Pinecone, Weaviate, FAISS, or Chroma for semantic search and RAG; understanding embeddings and similarity search.
- Frontend + GenAI Integration: Building GenAI-powered UIs with React, Next.js, or Flutter; integrating chatbots, image generators, or code assistants.
- Tools: OpenAI, Hugging Face, LangChain/LlamaIndex.
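The semantic-search bullet above reduces to nearest-neighbor lookup over embeddings; a toy sketch with hand-made three-dimensional vectors (real systems use learned embeddings and an index such as FAISS or Pinecone rather than a linear scan):

```python
import math

# Toy semantic search: rank documents by cosine similarity to a query vector.
# The corpus entries and their embeddings are invented for illustration.
CORPUS = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.8, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vec, k=1):
    """Return the k corpus entries most similar to the query embedding."""
    ranked = sorted(CORPUS, key=lambda doc: cosine(query_vec, CORPUS[doc]),
                    reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # ['refund policy']
```

Swapping the dict for an approximate-nearest-neighbor index is exactly what the vector databases named above provide.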
Posted 5 days ago