4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mandatory Skills 4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas. Understanding of fundamental programming concepts (data structures, algorithms, etc.). Eagerness to learn new tools and frameworks, including Generative AI technologies. Familiarity with version control systems (e.g., Git). Strong problem-solving skills and attention to detail. Exposure to data processing tools like Apache Spark or PySpark, and SQL. Basic understanding of APIs and how to integrate them. Interest in AI/ML and willingness to explore frameworks like LangChain. Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Job Description We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools. Key Responsibilities Assist in developing and maintaining Python-based applications. Write clean, efficient, and well-documented code. Collaborate with senior developers to integrate APIs and frameworks. Support data processing tasks using libraries like Pandas or PySpark. Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance. Debug and troubleshoot issues in existing applications.
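The kind of data task this role describes (grouping, cleaning, and aggregating records) can be sketched in plain Python; the function and field names below are invented for illustration, and in practice a Pandas `groupby` would do the same in one line.

```python
from collections import defaultdict

def average_by_city(rows):
    """Group records by city and compute the average amount.

    Records with a missing or non-numeric amount are skipped; this is
    the kind of basic cleaning step Pandas would otherwise handle.
    """
    totals = defaultdict(lambda: [0.0, 0])  # city -> [running sum, count]
    for row in rows:
        amount = row.get("amount")
        if isinstance(amount, (int, float)):
            totals[row["city"]][0] += amount
            totals[row["city"]][1] += 1
    return {city: s / n for city, (s, n) in totals.items()}

orders = [
    {"city": "Chennai", "amount": 100},
    {"city": "Chennai", "amount": 300},
    {"city": "Pune", "amount": 50},
    {"city": "Pune", "amount": None},  # dirty record, dropped
]
print(average_by_city(orders))  # {'Chennai': 200.0, 'Pune': 50.0}
```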
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas and who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform. Responsibilities Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, DataProc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Design new solutions to better serve AI/ML needs. Lead teams to expand our AI-enabled services.
Partner with governance teams to tackle key business needs. Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives. Partner with analytics teams to understand how value is created using data. Partner with central teams to leverage existing solutions to drive future products. Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage. Create insights into existing data to fuel the creation of new data products. Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows. Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality. Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance. Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Provide production support by addressing production issues as per SLAs. Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD). Continuously enhance your domain knowledge. Stay current on the latest data engineering practices. Contribute to the company's technical direction while maintaining a customer-centric approach. Qualifications GCP certified Professional Data Engineer Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 
5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions. In-depth understanding of GCP’s underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time): Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, DataProc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage such as Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, and Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing and deploying microservices architectures leveraging container orchestration frameworks. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques. Self-directed: works independently with minimal supervision and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Master’s degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field. Data engineering or development experience gained in a regulated financial environment.
Experience in coaching and mentoring Data Engineers. Experience with project management tools like Atlassian Jira. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience using data science concepts on production datasets to generate insights.
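The incremental-load pattern described above (landing source rows and merging them into a data mart) can be sketched with Python's built-in sqlite3 as a stand-in for BigQuery; the table and column names are invented for illustration.

```python
import sqlite3

# sqlite3 stands in for BigQuery here; in BigQuery this would be a MERGE
# statement, but the upsert-into-a-mart pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receivables_mart (account_id TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO receivables_mart VALUES ('A1', 100.0), ('A2', 250.0)")

# A new extract from a source system: A2 changed, A3 is new.
incoming = [("A2", 300.0), ("A3", 75.0)]
conn.executemany(
    "INSERT INTO receivables_mart (account_id, balance) VALUES (?, ?) "
    "ON CONFLICT(account_id) DO UPDATE SET balance = excluded.balance",
    incoming,
)

rows = conn.execute(
    "SELECT account_id, balance FROM receivables_mart ORDER BY account_id"
).fetchall()
print(rows)  # [('A1', 100.0), ('A2', 300.0), ('A3', 75.0)]
```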
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Overview Job Title: Technology Service Analyst, AS Location: Bangalore, India Role Description You will be operating within the Production Services team of the Trade Finance and Lending domain, a subdivision of Corporate Bank Production Services, as a Production Support Engineer. In this role, you will be accountable for the following: resolving user support requests and troubleshooting functional, application, and infrastructure incidents in the production environment; working on identified initiatives to automate manual work, improve application and infrastructure monitoring, and maintain platform hygiene; eyes-on-glass monitoring of services and batch jobs; preparing and fulfilling data requests; and participating in incident, change, and problem management meetings as required. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy. Best in class leave policy.
Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complimentary health screening for those 35 yrs. and above Your Key Responsibilities Provide hands-on technical support for a suite of applications/platforms within Deutsche Bank. Build up technical subject matter expertise on the applications/platforms being supported, including business flows, the application architecture and the hardware configuration. Resolve service requests submitted by the application end users to the best of L2 ability and escalate any issues that cannot be resolved to L3. Conduct real-time monitoring to ensure application SLAs are achieved and application availability (uptime) is maximized. Ensure all knowledge is documented and that support runbooks and knowledge articles are kept up to date. Approach support with a proactive attitude, working to improve the environment before issues occur. Update the run book and KEDB as and when required. Participate in all BCP and component failure tests based on the run books. Understand the flow of data through the application infrastructure; it is critical to understand the dataflow so as to best provide operational support. Your Skills And Experience Must Have: Programming language - Java. Operating systems - UNIX, Windows and the underlying infrastructure environments. Middleware - WebLogic (e.g. MQ, Kafka or similar). Webserver environment - Apache, Tomcat. Database - Oracle, MS-SQL, Sybase, NoSQL. Batch monitoring - Control-M / Autosys. Scripting - UNIX shell, PowerShell, Perl, Python. Monitoring tools - Geneos, AppDynamics, Dynatrace or Grafana. ITIL Service Management framework, such as Incident, Problem, and Change processes. Preferably knowledge of and experience with GCP.
Nice to Have: 5+ years of experience in IT in large corporate environments, specifically in the area of controlled production environments, or in Financial Services Technology in a client-facing function. Good analytical and problem-solving skills. Experience in an ITIL / best-practice service context; ITIL Foundation is a plus. Ticketing tool experience - Service Desk, ServiceNow. Understanding of SRE concepts (SLAs, SLOs, SLIs). Knowledge and development experience in Ansible automation. Working knowledge of one cloud platform (AWS or GCP). Excellent communication skills, both written and verbal, with attention to detail. Ability to work in virtual teams and in matrix structures. How We’ll Support You Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
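One routine L2 task mentioned above, real-time SLA monitoring, can be sketched in Python (one of the listed scripting languages); the "... took <n>ms" log format and the threshold are invented for illustration, not this bank's actual log schema.

```python
def scan_for_breaches(log_lines, threshold_ms=2000):
    """Flag requests whose response time exceeds an SLA threshold.

    Assumes each timed line ends with '... took <n>ms'; real log
    parsing would use a proper format or a monitoring tool's query.
    """
    breaches = []
    for line in log_lines:
        if " took " in line and line.endswith("ms"):
            ms = int(line.rsplit(" took ", 1)[1][:-2])  # strip trailing "ms"
            if ms > threshold_ms:
                breaches.append(line)
    return breaches

logs = ["GET /trade took 150ms", "POST /loan took 5400ms"]
print(scan_for_breaches(logs))  # ['POST /loan took 5400ms']
```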
Posted 1 week ago
5.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Experience: 6-8 years overall, with at least 2-3 years of deep hands-on experience in each key area below. What you’ll do Own and evolve our end-to-end data platform, ensuring robust pipelines, data lakes, and warehouses with 100% uptime. Build and maintain real-time and batch pipelines using Debezium, Kafka, Spark, Apache Iceberg, Trino, and ClickHouse. Manage and optimize our databases (PostgreSQL, DocumentDB, MySQL RDS) for performance and reliability. Drive data quality management — understand, enrich, and maintain context for trustworthy insights. Develop and maintain reporting services for data exports, file deliveries, and embedded dashboards via Apache Superset. Use orchestration tools like Maestro (or similar DAGs) for reliable, observable workflows. Leverage LLMs and other AI models to generate insights and automate agentic tasks that enhance analytics and reporting. Build domain expertise to solve complex data problems and deliver actionable business value. Collaborate with analysts, data scientists, and engineers to maximize the impact of our data assets. Write robust, production-grade Python code for pipelines, automation, and tooling. What you’ll bring Experience with our open-source data pipeline, data lake, and warehouse stack. Strong Python skills for data workflows and automation. Hands-on orchestration experience with Maestro, Airflow, or similar. Practical experience using LLMs or other AI models for data tasks. Solid grasp of data quality, enrichment, and business context. Experience with dashboards and BI using Apache Superset (or similar tools). Strong communication and problem-solving skills.
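The data-quality responsibility above can be sketched as a simple validate-and-quarantine gate; the field names and rules are illustrative, not this team's actual checks.

```python
def validate_events(events, required=("id", "ts")):
    """Split a batch into clean rows and a quarantine list.

    A simplified version of the quality gate a pipeline runs before
    loading into the warehouse; bad rows are kept with their errors
    so they can be inspected and replayed rather than silently lost.
    """
    clean, quarantined = [], []
    for event in events:
        missing = [f for f in required if event.get(f) in (None, "")]
        if missing:
            quarantined.append({"event": event, "errors": missing})
        else:
            clean.append(event)
    return clean, quarantined

batch = [{"id": 1, "ts": "2024-01-01"}, {"id": None, "ts": "2024-01-02"}]
clean, bad = validate_events(batch)
print(len(clean), len(bad))  # 1 1
```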
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you we would love to have you join us! Job Description Job Summary: We are seeking a talented and highly motivated Adobe Experience Manager (AEM) Full Stack Developer to join our team. In this role, you will be responsible for designing, developing, and maintaining web applications using Adobe Experience Manager (AEM). Your expertise in front-end and back-end development will play a crucial role in delivering high-quality and engaging digital experiences to our users.
Design, develop, test, and implement custom components, templates, workflows, multi-site management including translation framework and integrations using Adobe Experience Manager (AEM) to support various digital initiatives and projects. Work with UX/UI designers to translate design mockups into responsive web pages, ensuring a seamless user experience across different devices and browsers. Implement server-side logic, content repositories, and application logic using technologies such as Java, Apache Sling, CRX, JCR and OSGi in the AEM framework. Customize and extend AEM functionalities using Adobe's framework, HTL/Sightly, and client-side scripting languages (JavaScript, jQuery). Integrate AEM with other enterprise systems and third-party applications using RESTful APIs, JSON, and other web services. Identify and resolve performance bottlenecks in AEM applications to ensure optimal website performance and page load times. Implement and adhere to best practices for securing AEM applications, preventing vulnerabilities, and ensuring data privacy and compliance. Collaborate with content authors and editors to define content structures, templates, and workflows that optimize content creation and publishing processes. Conduct thorough testing of AEM applications to ensure they meet functional and performance requirements. Provide technical support and troubleshoot issues related to AEM applications, ensuring timely resolution of problems. Maintain clear and comprehensive technical documentation related to AEM projects, codebases, and configurations. Work closely with multiple cross-functional teams, including UX/UI designers, front-end developers, back-end developers, product owners and project managers to deliver successful projects. Bachelor's degree in computer science, software Engineering, or a related field. Proven experience in AEM development, with at least 4+ years of hands-on experience in AEM 6.5. 
Proven experience and solid knowledge of DevOps (Jenkins, JFrog Artifactory, Git, Adobe Cloud Manager). Proven experience in search technologies (Lucidworks, Apache Solr). Experience in integrating data and systems using MuleSoft and Salesforce is a plus. Proficiency in both front-end and back-end development technologies including HTML5, CSS3, React.js, JavaScript, Java, Apache Sling, OSGi, Maven, CRX, JCR, etc. Strong understanding of AEM architecture, components, templates, and workflows. Experience with AEM Sites, Assets, Dynamic Media and AEM Guides, Adobe Analytics, Adobe Target, Adobe Launch, Adobe Forms is a plus. Familiarity with integrating AEM with third-party systems and APIs. The Preferred - You Might Also Have Excellent analytical and problem-solving skills, with the ability to think creatively and propose innovative solutions. Strong verbal and written communication skills, with the ability to articulate technical concepts to non-technical stakeholders. Collaborative mindset with the ability to work effectively in a team-oriented environment. Demonstrated ability to adapt to changing technologies and project requirements. AEM Developer Certification is a plus, but not mandatory. What We Offer Our benefits package includes … Comprehensive mindfulness programs with a premium membership to Calm Volunteer Paid Time off available after 6 months of employment for eligible employees. Company volunteer and donation matching program – Your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation. Employee Assistance Program Personalized wellbeing programs through our OnTrack program On-demand digital course library for professional development... and other local benefits!
At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles. Rockwell Automation’s hybrid policy is that employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Overview: We are seeking a skilled DataOps Engineer with a strong foundation in DevOps practices and Data Engineering principles. The ideal candidate will be responsible for ensuring smooth deployment, observability, and performance optimization of data pipelines and platforms. You will work at the intersection of software engineering, DevOps, and data engineering bridging gaps between development, operations, and data teams. Key Responsibilities: Design, implement, and manage CI/CD pipelines using tools such as Jenkins, Git, and Terraform. Manage and maintain Kubernetes (K8s) clusters for scalable and resilient data infrastructure. Develop and maintain observability tools and dashboards (e.g., Prometheus, Grafana, ELK stack) for monitoring pipeline and platform health. Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools, preferably Terraform. Collaborate with data engineers to debug, optimize, and track performance of data pipelines (e.g., Airflow, Airbyte, etc.). Implement and monitor data quality, lineage, and orchestration workflows. Develop custom scripts and tools in Python to enhance pipeline reliability and automation. Work closely with data teams to manage and optimize Snowflake environments, focusing on performance tuning and cost efficiency. Ensure compliance with security, scalability, and operational best practices across the data platform. Act as a liaison between development and operations to maintain SLAs for data availability and reliability. Required Skills & Experience: 4-8 years of experience in DevOps / DataOps / Platform Engineering roles. Proficient in managing Kubernetes clusters and associated tooling (Helm, Kustomize, etc.). Hands-on experience with CI/CD pipelines, especially using Jenkins, GitOps, and automated testing frameworks. Strong scripting and automation skills in Python. Experience with workflow orchestration tools like Apache Airflow and data ingestion tools like Airbyte. 
Solid experience with Infrastructure as Code tools, preferably Terraform. Familiarity with observability and monitoring tools such as Prometheus, Grafana, Datadog, or New Relic. Working knowledge of data platforms, particularly Snowflake, including query performance tuning and monitoring. Strong debugging and problem-solving skills, especially in production data pipeline scenarios. Excellent communication skills and ability to collaborate across engineering, operations, and analytics teams. Preferred Qualifications: Experience with cloud platforms (AWS, and/or GCP) and cloud-native DevOps practices. Familiarity with data cataloging and lineage tools. Exposure to container security, policy management, and data governance tools. Background in data modeling, SQL optimization, or data warehousing concepts is a plus.
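The "custom scripts and tools in Python to enhance pipeline reliability" responsibility can be illustrated with a minimal retry-with-backoff helper; in practice an orchestrator such as Airflow usually provides this via task retry settings, so treat this as a sketch of the idea rather than a recommended replacement.

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # retry budget exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds; stands in for a transient API error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)
print(result, calls["n"])  # ok 3
```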
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Key Responsibilities: Design, develop, and maintain large-scale data processing workflows using big data technologies. Develop ETL/ELT pipelines to ingest, clean, transform, and aggregate data from various sources. Work with distributed computing frameworks such as Apache Hadoop, Spark, Flink, or Kafka. Optimize data pipelines for performance, reliability, and scalability. Collaborate with data scientists, analysts, and engineers to support data-driven projects. Implement data quality checks and validation mechanisms. Monitor and troubleshoot data processing jobs and infrastructure. Document data workflows, architecture, and processes for team collaboration and future maintenance.
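The aggregation step of such an ETL pipeline can be sketched in plain Python, mimicking Spark's `reduceByKey`; in PySpark the same result would come from `rdd.reduceByKey(operator.add)`, with the sort below playing the role of the distributed shuffle.

```python
from itertools import groupby

def reduce_by_key(pairs):
    """Sum values per key; a single-machine stand-in for Spark's reduceByKey."""
    pairs = sorted(pairs)  # groupby needs adjacent keys, like a shuffle/sort phase
    return {key: sum(v for _, v in grp)
            for key, grp in groupby(pairs, key=lambda p: p[0])}

page_views = [("home", 3), ("pricing", 1), ("home", 2)]
print(reduce_by_key(page_views))  # {'home': 5, 'pricing': 1}
```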
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Gurugram, Chennai
Work from Office
Responsibilities Work with development teams and product managers to ideate software solutions. Design client-side and server-side architecture. Build the front-end of applications through appealing visual design. Develop and manage well-functioning databases and applications. Write effective APIs. Test software to ensure responsiveness and efficiency. Troubleshoot, debug and upgrade software. Exposure to security aspects of the application. Build features and applications with a mobile responsive design. Write technical documentation. Work with data scientists and analysts to improve software. Requirements and skills Proven experience as a Full Stack Developer or similar role. Experience developing desktop and mobile applications. Familiarity with common stacks. Knowledge of multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery). Knowledge of a back-end language (e.g. Java) and JavaScript frameworks (e.g. Angular, React, Node.js). Familiarity with databases (e.g. MS SQL, MySQL, MongoDB), web servers (e.g. Apache) and UI/UX design. Exposure to cloud technology and native services (S3, Lambda, etc.). Exposure to streaming/messaging tools like Kafka. Exposure to deployment tools, containerization and Kubernetes. Exposure to ELK Stack / Splunk. Excellent communication and teamwork skills. Great attention to detail. Organizational skills. An analytical mind. Degree in Computer Science. Work Environment This is a hybrid work role where you are expected to be in the office three days a week. We offer you a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization. DISCLAIMER: Nothing in this job description restricts management's right to assign or reassign duties and responsibilities of this job to other entities, including but not limited to subsidiaries, partners, or purchasers of Alight business units.
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
We need a Server Expert to join our team! The ideal candidate would have at least 5 years of experience. Roles And Responsibilities Need 5-10 years of experience in web hosting and in managing CentOS, CloudLinux, Ubuntu, FreeBSD, and Windows servers. Linux - Ubuntu / Debian / CentOS / RHEL / Fedora; cloud - Azure / Amazon AWS / Google Cloud. Web servers - Nginx / Apache / Lighttpd / LiteSpeed. Dedicated server management. Linux/cPanel server setup. Server security - CSF / BitNinja / Imunify360. PHP - script installation / custom coding / Laravel. WHMCS installation. Windows server setup. Email expertise. Website migration. Database optimization. Server uptime monitoring.
Posted 1 week ago
1.0 - 8.0 years
3 - 10 Lacs
Bengaluru
Work from Office
Primary Responsibilities F5xc SRE: Play the role of a hands-on SRE Engineer focused on automation and toil-reduction and participate in Ops cycles to support our product. Perform oncall support function on a rotation basis, providing timely resolution of issues and ensuring operational excellence in managing and maintaining distributed networking and security products Easy-to-Use Automation: Continue to grow the infra-automation (k8s, ArgoCD, Helm Charts, Golang services, AWS, GCP, Terraform) with a focus on ease of configuration Environment Stability using Observability: Create and continue to evolve existing Observability (metrics & alerts) and participate in regular monitoring of infrastructure for stability. Collaborative Engagement: Collaborate closely with application owners and SRE team members as part of roadmap execution and continuous improvement of existing systems. Scale & Resilient systems: Design & deploy systems/infra which is highly available and resilient for the configured failure domains. Design systems using strong security principles with security by default. Knowledge, Skills and Abilities Elasticsearch : Deep understanding of indexing strategies, query optimization, cluster management, and tuning for high-throughput use cases. Familiarity with slow query analysis, scaling, and shard management. ClickHouse : Proven experience in designing and managing OLAP workloads, optimizing query performance, and implementing efficient table engines and materialized views. Apache Kafka : Expertise in event streaming architecture, topic design, producer/consumer configuration, and handling high-volume, low-latency data pipelines. Experience with Kafka Connect and Schema Registry is a plus. Vector (Datadog/Timber.io/Logs) : Proficiency in configuring Vector for observability pipelines, including log transformation, enrichment, and routing to multiple sinks (e.g., Elasticsearch, S3, ClickHouse). 
Hands-on experience with the Cortex suite of observability tools, including Cortex, Loki, Tempo, and Prometheus integration for scalable, multi-tenant monitoring systems. Familiar with integrating Cortex/Mimir with Grafana dashboards, Thanos, or Prometheus Remote Write to support observability-as-a-service use cases. Hands-on programming experience in at least one language (Python, Golang) plus shell scripting. Strong networking fundamentals and experience dealing with different layers of the networking stack. SRE/DevOps on Linux & Kubernetes: Demonstrate excellent, hands-on knowledge of deploying workloads and managing their lifecycle on Kubernetes, with practical experience debugging issues. Experience in upgrading workloads for SaaS services without downtime. On-call experience managing everyday ops for production environments. Experience in production alerts management and using dashboards to debug issues. GitOps: Experience with Helm charts/kustomizations and GitOps tools like ArgoCD/FluxCD. CI/CD: Experience working with/designing functional CI/CD systems. Cloud Infrastructure: Prior experience in deploying workloads and managing lifecycle on any cloud provider (AWS/GCP/Azure). Equal Employment Opportunity.
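The error-budget bookkeeping behind the alerting and SLO work described above can be sketched as follows; the 99.9% target and traffic numbers are illustrative, and real burn-rate alerts would be expressed as Prometheus/Cortex recording rules rather than application code.

```python
def error_budget_remaining(total_requests, failed_requests, slo=0.999):
    """Fraction of the SLO error budget still unspent (can go negative)."""
    allowed_failures = total_requests * (1 - slo)
    if allowed_failures == 0:
        return 0.0  # no traffic yet, so no budget to spend
    return 1 - (failed_requests / allowed_failures)

# 1M requests at a 99.9% SLO allow ~1000 failures; 250 used leaves ~75%.
print(round(error_budget_remaining(1_000_000, 250), 4))  # 0.75
```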
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Pune
Work from Office
About the Role Mindbowser needs a WordPress Web Engineer who lives and breathes Advanced Custom Fields and keeps every site form in sync through Zoho Forms. You’ll own the entire website build-and-publish cycle, making pages fast, secure, and data-ready for marketing. Responsibilities Design and build pixel-perfect pages, blogs, and on-site elements that follow Mindbowser’s brand guide Maintain a reusable ACF component library so marketing can launch new sections quickly Integrate WordPress forms and workflows with Zoho Forms; ensure clean field mapping and error-free lead capture Create and manage Zoho Forms templates, conditional rules, and notifications to support automation Automate image compression, backups, and scheduled publishing Pilot AI add-ons (chat widgets, content helpers, calculators, etc.) and track results Audit plugins, theme code, and database queries; remove bloat and tighten security Monitor Core Web Vitals with Lighthouse and fix issues before they affect traffic Must-have skills 3+ years of WordPress development with custom themes, hooks, filters, and ACF Pro (flexible content, repeaters, CPTs) Proven WordPress-Zoho Forms integrations and rel
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
MerQube is a cutting-edge fintech firm, specializing in the development of advanced technology for indexing and rules-based investing. We offer innovative solutions for designing and calculating complex, rules-based strategies. Founded in 2019 by a team of index industry veterans and technology experts, MerQube provides a tech-focused alternative in the indexing space, with offices in New York, San Francisco, Bangalore and London. MerQube designs and calculates a wide variety of indices, including thematic, ESG, QIS, and delta one strategies, spanning multiple asset classes such as equities, futures, and options. Leveraging cloud-based architecture and advanced index-tracking technology, our platform enables clients to bring new ideas to market swiftly and efficiently. Summary Are you keen to work in an environment that's stimulating and convivial with countless opportunities for growth and plenty of freedom to make a real impact? This could be the place for you! We are looking for our next Full Stack Staff Backend Engineer based in Bangalore, India. Supported by and reporting to the Director of Engineering, you will be joining a friendly and growing team to disrupt the index space and participate in the next phase of our growth. What will you do? As part of the Platform Engineering team, you will design, build and operate large-scale services leveraging public clouds to build an industry-leading platform for financial indices. The platform supports onboarding new customers, ingesting and processing large amounts of data from a variety of sources and computing a variety of financial indices. We are looking for self-driven engineers who are comfortable working in a collaborative and fast-paced environment with attention to detail. You will work closely with functional teams to build and improve MerQube's index computation and management systems, internal and external index management and research tools, and data dissemination systems.
What the position requires: Bachelor's Degree in Computer Science, Engineering or equivalent work experience. 7+ years' experience building production services using Python/Go/C++/Java 3+ years' experience building production frontends using React Experience designing, building and operating complex features as part of a team. Preferred Qualifications Familiarity with public cloud environments (GCP/AWS) or a strong desire and aptitude to ramp up on them. Experience developing complex backend applications Experience leading an engineering team to deliver a project SQL, database modeling Experience with pandas, numpy, scipy Familiarity with data processing pipelines and workflows (Apache Airflow or similar) FastAPI, Flask, uWSGI, OpenAPI, Pydantic Docker, Helm, Kubernetes, EKS We believe in creating and preserving an environment of collaboration and growth for all team members, and take steps every day to promote inclusivity, wellness, and fun. With these commitments in mind, we are proud to offer: Competitive compensation packages Stellar full-time benefits, including medical, dental, vision, and more Flexible working arrangements, including opportunities to work remotely Community-first environment (we want your colleagues to be your friends!) Focus on health, wellness, and work-life balance Opportunities to learn, develop, and grow PTO, holiday, and sick time Equal Opportunity Employer MerQube is committed to building a diverse and inclusive team. All qualified applicants will be considered without regard to race, color, religion, sex, sexual orientation, gender identity or expression, age, national origin, disability, protected veteran status, or any other factor protected by applicable federal, state, or local laws. If you're the best person for the job, we want you on board!
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
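A weighted-basket index of the kind the platform above computes can be sketched in a few lines. The tickers, prices, weighting rule, and divisor below are made-up illustrative numbers, not a real MerQube methodology:

```python
# Toy sketch of a rules-based index: level = weighted basket value / divisor.
# The divisor is fixed at launch so the index starts at a round base level.

def index_level(prices: dict, weights: dict, divisor: float) -> float:
    """Standard shape of a weighted-basket index calculation."""
    return sum(weights[s] * prices[s] for s in prices) / divisor

prices = {"AAA": 120.0, "BBB": 80.0, "CCC": 50.0}
total = sum(prices.values())
weights = {s: p / total for s, p in prices.items()}  # price-weighted rule
divisor = 0.932  # chosen here so the base level is exactly 100

print(round(index_level(prices, weights, divisor), 4))  # 100.0
```

On each rebalance date the rule recomputes the weights; the divisor is then adjusted so the level is continuous across the rebalance, which is the standard trick behind most index families.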
Posted 1 week ago
9.0 - 14.0 years
30 - 35 Lacs
Gurugram
Work from Office
About this role Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $11.6 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next generation technology and solutions. What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments.
We perform risk calculations and process millions of transactions for thousands of users every day worldwide! Being a member of Aladdin Engineering, you will be: Tenacious: Work in a fast-paced and highly complex environment. Creative thinker: Analyse multiple solutions and deploy technologies in a flexible way. Great teammate: Think and work collaboratively and communicate effectively. Fast learner: Pick up new concepts and apply them quickly. Responsibilities include: Collaborate with team members in a multi-office, multi-country environment. Deliver high efficiency, high availability, concurrent and fault tolerant software systems. Significantly contribute to development of Aladdin's global, multi-asset trading platform. Work with product management and business users to define the roadmap for the product. Design and develop innovative solutions to complex problems, identifying issues and roadblocks. Apply validated quality software engineering practices through all phases of development. Ensure resilience and stability through quality code reviews, unit, regression and user acceptance testing, DevOps and level-two production support. Be a leader with vision and a partner in brainstorming solutions for team productivity, efficiency, guiding and motivating others. Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions and employee engagement. Candidate should be able to lead individual projects' priorities, deadlines and deliverables using AGILE methodologies. Qualifications: B.E./B.Tech./MCA or any other relevant engineering degree from a reputed university.
9+ years of proven experience Skills and Experience: A proven foundation in core Java and related technologies, with OO skills and design patterns Hands-on experience in designing and writing code with object-oriented programming knowledge in Java, Spring, TypeScript, JavaScript, Microservices, Angular, React. Strong knowledge of the open-source technology stack (Spring, Hibernate, Maven, JUnit, etc.). Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC. Experience with relational databases and/or NoSQL databases (e.g., Apache Cassandra) Exposure to high-scale distributed technology like Kafka, Mongo, Ignite, Redis Track record building high quality software with design-focused and test-driven approaches Great analytical, problem-solving and communication skills Some experience or a real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions. Candidate should have experience leading development teams, projects or being responsible for the design and technical quality of a significant application, system, or component. Ability to form positive relationships with partnering teams, sponsors, and user groups. Nice to have and opportunities to learn: Experience working in an agile development team or on open-source development projects. Experience with optimization, algorithms or related quantitative processes. Experience with Cloud platforms like Microsoft Azure, AWS, Google Cloud Experience with DevOps and tools like Azure DevOps Experience with AI-related projects/products or experience working in an AI research environment. A degree, certifications or open-source track record that shows you have a mastery of software engineering principles. Our benefits. Our hybrid work model BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all.
Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. This mission would not be possible without our smartest investment: the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role : Data Scientist II About Media.net : Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing Programmatic buying, the latest industry-standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats like display, video, mobile, native, as well as search. Media.net’s U.S. HQ is based in New York, and the Global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services. Data Science is at the heart of Media.net. The team uses advanced statistical and machine learning and deep learning models, large scale distributed computing along with tools from mathematics, economics, auction theory to build solutions that enable us to match users with relevant ads in the most optimal way thereby maximizing revenue for our customers and for Media.net. Some of the challenges the team deals with: How do you use information retrieval, machine learning models to estimate click through rate and revenue given the information regarding the position of the slot, user device, location and content of the page. How do you scale the same for thousands of domains, millions of urls? How do you match ads to page views considering contextual information? How do you design learning mechanisms to continuously learn from user feedback in the form of clicks and conversions? How do you deal with the extremely sparse data? What do you do for new ads and new pages? How do we design better explore-exploit frameworks? How do you design learning algorithms that are fast and scalable? How do you combine contextual targeting with behavioral user-based targeting? 
How do you establish a unique user identity based on multiple noisy signals so that behavioral targeting is accurate? Can you use NLP to find more generic trends based on the content of the page and ads? What is in it for you? Understand business requirements, analyze and extract relevant information from large amounts of historical data. Use your knowledge of Information Retrieval, NLP, Machine Learning (including Deep Learning) to build prototype solutions keeping scale, speed and accuracy in mind. Work with engineering teams to implement the prototype. Work with engineers to design appropriate model performance metrics and create reports to track the same. Work with the engineering teams to identify areas of improvement, jointly develop research agenda and execute on the same using cutting edge algorithms and tools. You will need to understand a broad range of ML algorithms and have an appreciation of how to apply them to complex practical problems. You will also need to have enough theoretical background and a good grasp of algorithms to be able to critically evaluate existing ML algorithms and be creative when there is a need to go beyond textbook solutions. Who should apply for this role? PhD/Research Degree or BS/MS in Computer Science, Statistics, Artificial Intelligence, Machine Learning, Operations Research or related field. 2-4 years of experience in building Machine Learning/AI/Information Retrieval models Extensive knowledge and practical experience in machine learning, data mining, artificial intelligence, statistics. Understanding of supervised and unsupervised algorithms including but not limited to linear models, decision trees, random forests, gradient boosting machines etc. Excellent analytical and problem-solving abilities. Good knowledge of scientific programming in Python. Experience with Apache Spark is desired.
Excellent verbal & written communication skills Bonus Points: Publications or presentations in recognized Machine Learning and Data Mining journals/conferences such as ICML Knowledge in several of the following: math/math modeling, decision theory, fuzzy logic, Bayesian techniques, optimization techniques, statistical analysis of data, information retrieval, natural language processing, large scale data processing and data mining Ability to deal with ambiguity and break ambiguous problems down into research problems Strong theoretical and research acumen
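The explore-exploit question posed above has a classic answer in Thompson sampling: sample a CTR for each ad from a Beta posterior over its observed clicks and impressions, show the ad with the highest sample, and update with the feedback. The sketch below simulates this with made-up CTRs; it is an illustration of the technique, not Media.net's production approach:

```python
# Thompson sampling for ad selection: exploration falls away naturally as
# the posteriors tighten around the true click-through rates.
# The "true" CTRs are simulated and purely illustrative.

import random

def choose_ad(stats):
    """stats[ad] = [clicks, impressions]; sample CTR ~ Beta(c + 1, i - c + 1)."""
    samples = {ad: random.betavariate(c + 1, i - c + 1)
               for ad, (c, i) in stats.items()}
    return max(samples, key=samples.get)

random.seed(0)
true_ctr = {"ad_a": 0.05, "ad_b": 0.12}   # hidden from the learner
stats = {ad: [0, 0] for ad in true_ctr}

for _ in range(5000):
    ad = choose_ad(stats)
    clicked = random.random() < true_ctr[ad]
    stats[ad][0] += clicked
    stats[ad][1] += 1

# The learner should concentrate impressions on the higher-CTR ad.
print({ad: i for ad, (c, i) in stats.items()})
```

The same posterior-sampling idea extends to the contextual case (per-page, per-slot posteriors), which is where the scaling questions in the posting begin.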
Posted 1 week ago
5.0 - 10.0 years
35 - 40 Lacs
Pune
Work from Office
SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job no more, no less. Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Data Engineer to help build robust data ingestion and processing systems to power our data platform. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch but have the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base. Responsibilities: Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases and data warehouses. Develop and maintain scalable data pipelines for both stream and batch processing leveraging JVM-based languages and frameworks. Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem. Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration and data streaming problems. Develop and maintain workflow orchestration using tools like Apache Airflow. Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes. Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills. Qualifications: BS in computer science or a related field. 5+ years of experience in data engineering or a related field.
Demonstrated system-design experience orchestrating ELT processes targeting data. Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark. Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes. Proficiency in the AWS service stack. Familiarity with workflow orchestration tools such as Airflow. Experience with DBT, Kafka, Jenkins and Snowflake. Experience leveraging tools such as Kustomize, Helm and Terraform for implementing infrastructure as code. Strong interest in staying ahead of new technologies in the data engineering space. Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space. Preferred: Experience with AWS Experience with CI/CD Experience instrumenting code for gathering production performance metrics Experience in working with a data catalog tool (e.g., Atlan/Alation)
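The stream-processing side of the role boils down to windowed aggregation. Here is that core idea as a toy tumbling-window count in plain Python; frameworks like Flink or Spark Structured Streaming run the same computation distributed and fault-tolerant, and the event shape and window size below are illustrative:

```python
# Tumbling-window aggregation: fixed, non-overlapping time windows, with a
# per-key count inside each window - the building block of most streaming
# pipelines.

from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """events is an iterable of (timestamp_secs, key) pairs; return
    {window_start: {key: count}} for each non-empty window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(3, "login"), (45, "login"), (61, "logout"), (62, "login"), (130, "login")]
print(tumbling_window_counts(events))
# -> {0: {'login': 2}, 60: {'logout': 1, 'login': 1}, 120: {'login': 1}}
```

What the real frameworks add on top is exactly what this toy lacks: event-time semantics with watermarks for late data, checkpointed state, and parallelism across partitions.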
Posted 1 week ago
3.0 - 10.0 years
12 - 14 Lacs
Bengaluru
Work from Office
Message to applicants applying to work in the U.S. and/or Canada: When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process. U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings. Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco's flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco's Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next such that the maximum number of sick time hours an employee may have available is 160 hours.
Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community. Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. For quota-based incentive pay, Cisco typically pays as follows: 0.75% of incentive target for each 1% of revenue attainment up to 50% of quota; 1.5% of incentive target for each 1% of attainment between 50% and 75%; 1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation. For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
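Read literally, the quota-based schedule above is a piecewise-linear payout tuned so that 100% attainment pays exactly 100% of target. A worked sketch of that reading (capped at 100% attainment here, since above quota only a floor rate is stated):

```python
# A literal reading of the quota-based incentive schedule: piecewise-linear
# in attainment, reaching exactly 100% of target at 100% attainment.
# Attainment above 100% is capped here because only a floor rate is given.

def incentive_pct(attainment_pct: float) -> float:
    """Percent of incentive target earned at a given percent of quota attainment."""
    a = min(attainment_pct, 100.0)
    earned = min(a, 50.0) * 0.75                    # 0.75% per point up to 50%
    earned += max(min(a, 75.0) - 50.0, 0.0) * 1.5   # 1.5% per point, 50-75%
    earned += max(a - 75.0, 0.0) * 1.0              # 1% per point, 75-100%
    return earned

print(incentive_pct(50))    # 37.5
print(incentive_pct(75))    # 75.0
print(incentive_pct(100))   # 100.0
```

Note how the 1.5% accelerator band between 50% and 75% pulls the payout back level with attainment: a rep at half quota earns only 37.5% of target, but by 75% attainment the two figures match.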
Posted 1 week ago
3.0 - 7.0 years
13 - 14 Lacs
Gurugram
Work from Office
Responsibilities Work with development teams and product managers to ideate software solutions Design client-side and server-side architecture Build the front-end of applications through appealing visual design Develop and manage well-functioning databases and applications Write effective APIs Test software to ensure responsiveness and efficiency Troubleshoot, debug and upgrade software Exposure to security aspects of the application Build features and applications with a mobile-responsive design Write technical documentation Work with data scientists and analysts to improve software Requirements and skills Proven experience as a Full Stack Developer or similar role Experience developing desktop and mobile applications Familiarity with common stacks Knowledge of multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, jQuery) Knowledge of back-end languages (e.g. Java) and JavaScript frameworks (e.g. Angular, React, Node.js) Familiarity with databases (e.g. MS SQL, MySQL, MongoDB), web servers (e.g. Apache) and UI/UX design Exposure to cloud technology and native services (S3, Lambda, etc.) Exposure to streaming/messaging tools like Kafka Exposure to deployment tools, containerization & Kubernetes Exposure to the ELK Stack/Splunk Excellent communication and teamwork skills Great attention to detail Organizational skills An analytical mind Degree in Computer Science Work Environment This is a hybrid work role where you are expected to be in the office three days a week. We offer you a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
Posted 1 week ago
4.0 - 9.0 years
12 - 17 Lacs
Pune
Work from Office
Req ID: 322124 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Consulting-Technical Consultant to join our team in Pune, Maharashtra (IN-MH), India (IN). Required Qualifications: Mandatory 4+ years of hands-on experience in design and development of RESTful APIs and Microservices using technology stack: Java/J2EE, Spring framework, Spring Batch, AWS Elastic Kubernetes Service (EKS), RDS Oracle DB, Apigee/API Gateway. 5+ years of experience and expertise in frontend development using React JS, HTML5, CSS3 and responsive web application development. Must have experience in REST API integrations Experience in API layer security (e.g., JWT, OAuth2), API logging, API testing; creating REST API documentation using Swagger and YAML or similar tools desirable Experience in TDD, writing unit test cases in JUnit. Unit test frameworks: Mockito (Java), JUnit (Java); Nice to have exposure to end-to-end test frameworks: FitNesse/Test API, Protractor; Functional testing: Cucumber; Performance test tools: JMeter Proficient in SQL and stored procedures, such as in RDS Oracle DB Experience with Unix, Linux operating systems, preferably in an AWS environment. Experience with Scrum and other Agile processes. Knowledge of Jira, Git/SVN, Jenkins, DevOps, CI/CD Spring framework (4.x) Hibernate ORM 4.x Database - MS SQL Server; SQL versioning tool - Flyway Apache ActiveMQ PDF generation libraries - iText, Flying Saucer, HTML, CSS (for PDF 1.x) Build tools - Maven, Jenkins UI - Understanding of core JavaScript is needed; other JavaScript frameworks can be learned, and these are the frameworks and libraries: Angular or React; TypeScript, React, Redux, RxJS; Lodash; Gulp or webpack About NTT DATA We are one of the leading providers of digital and AI infrastructure in the world.
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us.
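The JWT-based API security in the requirements above rests on one check: recompute the HMAC over `header.payload` and compare it to the token's signature. A stdlib-only HS256 sketch follows; real services should use a maintained JWT library, and the secret and claims here are illustrative:

```python
# Minimal HS256 JWT sign/verify using only the standard library, to show
# what the signature check actually is. Not for production use.

import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(header: dict, payload: dict, secret: bytes) -> str:
    signing_input = (b64url(json.dumps(header).encode()) + b"." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(sig)).decode()

def verify(token: str, secret: bytes) -> dict:
    """Recompute the HMAC over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(secret, signing_input.encode(),
                               hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

secret = b"demo-secret"   # illustrative only
token = sign({"alg": "HS256", "typ": "JWT"}, {"sub": "user-42"}, secret)
print(verify(token, secret)["sub"])  # user-42
```

The constant-time compare (`hmac.compare_digest`) matters: a naive `==` leaks timing information an attacker can use to forge signatures byte by byte.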
Posted 1 week ago
7.0 - 9.0 years
13 - 18 Lacs
Noida
Work from Office
About the company: CoreOps.AI is a new-age company founded by highly experienced leaders from the technology industry with a vision to be the most compelling technology company that modernizes enterprise core systems and operations. Website: https://coreops.ai CoreOps is building the AI operating system for enterprises - accelerating modernization by 50% and cutting costs by 25% through intelligent automation, data orchestration, and legacy transformation. At CoreOps.AI, we believe in the quiet power of transformation, like the dandelion that seeds change wherever it lands. Inspired by this symbol of resilience and growth, our enterprise AI solutions are designed to take root seamlessly, enrich core operations, and spark innovation across your business. Founded by industry veterans with deep B2B expertise and a track record of scaling global tech businesses, CoreOps.AI brings the power of agentic AI to modernize legacy systems, accelerate digital transformation, and shape the future of intelligent enterprises.
Key Responsibilities: Work with development teams and product managers to ideate software solutions Design client-side and server-side architecture Build the front-end of applications through appealing visual design Develop and manage well-functioning databases and applications Write effective APIs Test software to ensure responsiveness and efficiency Troubleshoot, debug and upgrade software Create security and data protection settings Build features and applications with a mobile-responsive design Write technical documentation Work with AI/ML engineers, data scientists and analysts to improve software Collaborate with cross-functional teams, including the SAP & machine learning teams and customers, to define requirements and implement AI solutions Establish and enforce data governance standards and implement best practices for data privacy and protection of applications Qualifications and Education Requirements: Bachelor's/Master's degree in computer science, data science, mathematics or a related field. At least 8-12 years of experience in building AI/ML applications Preferred Skills: Very good understanding of Agile project methodologies Experience working with multiple front-end languages and libraries (e.g. HTML/CSS, JavaScript, XML, JSON, jQuery, Bootstrap) Experience working with multiple back-end languages (e.g. Python, J2EE) and JavaScript frameworks (e.g. Angular, React, Node.js) Worked on databases (e.g. MySQL, SQL Server, MongoDB), application servers (e.g. Django, JBoss, Apache) and UI/UX design Has led a team for at least 4 years. Has worked on providing end-to-end solutions Great communication and collaboration skills Self-starter with an entrepreneurial, result-oriented mindset Excellent communication, negotiation, and interpersonal skills
Posted 1 week ago
4.0 - 9.0 years
11 - 15 Lacs
Kochi, Chennai, Thiruvananthapuram
Work from Office
Job Description Develop ML and Deep Learning Solutions Create predictive, classification, and optimization models using supervised, unsupervised, and reinforcement learning. Design and Implement Generative AI Systems Build and fine-tune Large Language Models (LLMs) and diffusion models for a range of use cases involving text, code, and multi-modal data. Build Scalable Recommendation Engines Develop and optimize recommendation systems using collaborative filtering, content-based filtering, hybrid models, or sequential models. Cloud-Native ML Engineering Deploy and manage machine learning pipelines and APIs in cloud environments (AWS, GCP, Azure), ensuring scalability, observability, and cost efficiency. End-to-End ML Lifecycle Ownership Own the model lifecycle from feature engineering and experimentation to deployment, CI/CD, monitoring, and iteration. Collaborate Across Functions Work with cross-functional teams including product managers, engineers, and data scientists to translate business goals into AI-powered solutions. Core Requirements Minimum 4 years of experience in building and deploying production-level ML/DL systems Develop proof-of-concept AI/ML-based solutions and services and demonstrate them to business stakeholders Design and deliver ML architecture patterns operable in native and hybrid cloud architectures. Create functional and technical specifications for AI/ML solutions. Implement machine learning algorithms in services and pipelines that can be used at web scale. Design, develop, and implement Generative AI models using state-of-the-art techniques. Collaborate with cross-functional teams to define project goals, research requirements, and develop innovative solutions.
Strong understanding of transformer architectures and deep learning model design Hands-on experience in building at least one production-grade deep learning solution or Generative AI solution Expertise in Python programming with a focus on: Code optimization and profiling Multi-threading and multiprocessing Object-oriented programming and design principles Experience deploying models in a cloud-native environment with strong MLOps practices Understanding of model evaluation, observability, A/B testing, and feedback loops Excellent problem-solving and analytical skills. Strong communication and presentation skills. Technical Stack Languages: Excellent understanding of object-oriented concepts and Python. ML/DL Frameworks: Machine Learning: Scikit-learn, XGBoost, LightGBM Deep Learning: PyTorch, TensorFlow, Keras GenAI: Hugging Face Transformers, LangChain, OpenAI APIs, Gemini, agentic frameworks Cloud & MLOps: AWS SageMaker, GCP Vertex AI, Azure ML, MLflow, Kubeflow, Docker, Kubernetes Data & Compute: Apache Spark, BigQuery, RabbitMQ, S3, EC2 Embedding Stores & Vector Search: FAISS, Pinecone, ChromaDB Orchestration & APIs: Apache Airflow, Docker deployment, and FastAPI are a must Preferred Qualifications Experience with multi-modal models combining text, vision, or audio Familiarity with Retrieval-Augmented Generation (RAG), embedding stores, and agent-based orchestration (LangGraph, ReAct) Open-source contributions in AI/ML or GenAI ecosystems Certifications in cloud-based machine learning platforms (AWS/GCP/Azure) Education B.Tech / M.Tech / Ph.D. in Computer Science, Artificial Intelligence, Data Science, or a related field Equivalent practical experience in developing scalable AI solutions will also be considered What Company Offers: Excellent career growth opportunities and exposure to multiple technologies. Fixed weekday schedule, meaning you'll have your weekends off! Family Medical Insurance. Unique leave benefits and encashment options based on performance.
- Long-term growth opportunities.
- A fun, family-like environment surrounded by experienced developers.
- Various internal employee rewards programs based on performance.
- Bonus programs for training hours taken, certifications, and special value delivered to the business through ideas and innovation.
- Work-life balance: flexible work timings, early-out Fridays, and various social and cultural activities.
- Company-sponsored international tours.
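As a flavor of the recommendation-engine work this role describes, here is a minimal sketch of item-based collaborative filtering with NumPy. The rating matrix and scoring scheme are toy illustrations, not a production design; real systems would use approximate nearest-neighbor search (e.g., FAISS) at scale.

```python
import numpy as np

def item_similarity(ratings: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns of a user-item rating matrix."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero for unrated items
    normalized = ratings / norms
    return normalized.T @ normalized

def recommend(ratings: np.ndarray, user: int, k: int = 2) -> list[int]:
    """Score unseen items for `user` by similarity-weighted ratings."""
    sim = item_similarity(ratings)
    scores = sim @ ratings[user]
    scores[ratings[user] > 0] = -np.inf  # exclude items already rated
    return np.argsort(scores)[::-1][:k].tolist()

# Toy user-item matrix: rows = users, columns = items (0 = unrated).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
], dtype=float)
print(recommend(R, user=1))  # user 1 resembles user 0, so item 3 ranks first
```

Content-based, hybrid, and sequential models mentioned in the posting swap out the similarity source but keep the same score-then-rank shape.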
Posted 1 week ago
4.0 - 6.0 years
14 - 18 Lacs
Gurugram
Work from Office
Experience
4-6 years of professional experience as a backend engineer, primarily working with server-side technologies.

Required Skills
- Strong expertise in TypeScript and building scalable backend applications using Express (NestJS preferred).
- Proficient in building and managing microservices architectures.
- Experience with an ORM, preferably Prisma.
- Hands-on experience with Apache Kafka for real-time data streaming and messaging.
- Experience with Google Cloud Platform (GCP) services, including but not limited to Cloud Functions, Cloud Run, Pub/Sub, BigQuery, and Kubernetes Engine.
- Familiarity with RESTful APIs, database systems (SQL/NoSQL), and performance optimization.
- Solid understanding of version control systems, particularly Git.

Preferred Skills
- Knowledge of containerization using Docker.
- Experience with automated testing frameworks and methodologies.
- Understanding of monitoring, logging, and observability tools and practices.

Responsibilities
- Design, develop, and maintain backend services using NestJS within a microservices architecture.
- Implement robust messaging and event-driven architectures using Kafka.
- Deploy, manage, and optimize applications and services on Google Cloud Platform.
- Ensure high performance, scalability, reliability, and security of backend services.
- Collaborate closely with front-end developers, product managers, and DevOps teams.
- Write clean, efficient, and maintainable code, adhering to best practices and coding standards.
- Perform comprehensive testing and debugging, addressing production issues promptly.
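A core pattern behind the event-driven responsibilities above is the idempotent consumer: Kafka's at-least-once delivery means the same message can arrive twice, so handlers must deduplicate. The sketch below illustrates the pattern in Python with an in-memory queue standing in for a Kafka topic; in this posting's stack the same idea would be implemented in NestJS with a Kafka client.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Event:
    event_id: str
    payload: dict

@dataclass
class IdempotentConsumer:
    """Track processed event IDs so a redelivered event has no extra effect."""
    processed: set = field(default_factory=set)
    results: list = field(default_factory=list)

    def handle(self, event: Event) -> bool:
        if event.event_id in self.processed:
            return False  # duplicate delivery: skip side effects
        self.results.append(event.payload)
        self.processed.add(event.event_id)
        return True

# Simulated topic with a redelivered duplicate of event "e1".
queue = deque([
    Event("e1", {"order": 1}),
    Event("e2", {"order": 2}),
    Event("e1", {"order": 1}),
])
consumer = IdempotentConsumer()
while queue:
    consumer.handle(queue.popleft())
print(len(consumer.results))  # 2: the duplicate was skipped
```

In production the processed-ID set would live in a durable store (e.g., a database keyed by event ID) so deduplication survives restarts.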
Posted 1 week ago
25.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Redpin
At Redpin we simplify life's most important payments. Buying a new property overseas can be a stressful time, especially when it comes to moving your money. Through our Currencies Direct and TorFX brands we've been helping people do just that for over 25 years. With recent investment we're now on a mission to build a new range of digital products and services that will make moving money internationally for real-estate purchases even easier. We're on a mission to become the solution for real-estate payments everywhere. To do this, we are transitioning our business from a horizontal FX platform to a verticalized, embedded software company, as we look to the future and Redpin 2.0.

About the Role
At Redpin, we're passionate about building software that solves problems. We count on our site reliability engineers (SREs) to empower users with a rich feature set, high availability, and a stellar performance level to pursue their missions. As we expand customer deployments, we're seeking an experienced SRE to deliver insights from massive-scale data in real time. Specifically, we're searching for someone who has fresh ideas and a unique viewpoint, and who enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences for every interaction.

What you'll do
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Build software and systems to manage platform infrastructure and applications.
- Improve the reliability, quality, and time-to-market of our suite of software solutions.
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
- Provide primary operational support and engineering for multiple large-scale distributed software applications.
- Design, implement, and maintain highly available and scalable infrastructure and systems on AWS.
- Gather and analyze metrics from operating systems and applications to assist in performance tuning and fault finding.
- Partner with development teams to improve services through rigorous testing and release procedures.
- Participate in system design consulting, platform management, and capacity planning.
- Create sustainable systems and services through automation and uplifts.
- Balance feature-development speed and reliability with well-defined service-level objectives.

What you'll need
- Bachelor's degree in Computer Science, Software Engineering, or a related field (Master's degree preferred).
- 4-10 years of experience as a Site Reliability Engineer or in a similar role.
- Strong knowledge of system architecture, infrastructure design, and best practices.
- Proficiency in scripting and automation using languages like Python, Bash, or similar technologies.
- Experience with cloud platforms such as AWS, including infrastructure provisioning and management.
- Strong understanding of networking principles and protocols.
- Experience supporting applications built on Java, Spring Boot, Hibernate JPA, Python, React, and .NET.
- Knowledge of API gateway solutions like Kong and Layer 7.
- Experience working with databases such as Elasticsearch, SQL Server, and PostgreSQL.
- Familiarity with messaging systems like MQ, ActiveMQ, and Kafka.
- Proficiency in managing servers such as Tomcat, JBoss, Apache, NGINX, and IIS.
- Experience with containerization using EKS (Elastic Kubernetes Service).
- Knowledge of CI/CD processes and tools like Jenkins, Artifactory, and Ansible.
- Proficiency in monitoring tools such as Coralogix, CloudWatch, Zabbix, Grafana, and Prometheus.
- Strong problem-solving and troubleshooting skills with the ability to analyse and resolve complex technical issues.
- Excellent communication and collaboration skills to work effectively in a team environment.
- Strong attention to detail and the ability to prioritize and manage multiple tasks simultaneously.
Self-motivated and able to work independently with minimal supervision.
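The monitoring and automation duties above often begin with small health-check scripts. Below is a minimal stdlib-only sketch; the disk threshold and check set are illustrative, and a production SRE setup would export such signals to Prometheus or CloudWatch rather than printing them.

```python
import shutil
from dataclasses import dataclass

@dataclass
class HealthCheck:
    name: str
    ok: bool
    detail: str

def check_disk(path: str = "/", threshold: float = 0.9) -> HealthCheck:
    """Flag the check as failing when disk usage exceeds the threshold."""
    usage = shutil.disk_usage(path)
    used_ratio = (usage.total - usage.free) / usage.total
    return HealthCheck("disk", used_ratio < threshold,
                       f"{used_ratio:.1%} used on {path}")

def overall_status(checks: list) -> str:
    """Holistic view of system health: healthy only if every check passes."""
    return "healthy" if all(c.ok for c in checks) else "degraded"

checks = [check_disk("/")]
print(overall_status(checks))
```

Extending the `checks` list (load average, service ports, certificate expiry) keeps the aggregation logic unchanged while broadening the holistic view the posting asks for.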
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Description: Citi is embarking on a multi-year technology initiative in the Wealth Tech Banking & Payment Technology space. In this journey, we are looking for a highly motivated, hands-on senior developer. We are building a platform which supports various messaging, API, and workflow components for banking and payment services across the bank. The solution will be built from scratch using the latest technologies. The candidate will be a core member of the technology team responsible for implementing projects based on Java, Spring Boot, and Kafka. This is an excellent opportunity to immerse in and learn within the wealth tech banking division and gain exposure to business and technology initiatives targeted to maintain a lead position among its competitors. We work in a Hybrid-Agile environment. The Applications Development Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Individual contributor: write good-quality code in Java, Spring Boot (related stack), Angular, or any other UI tech stack.
- Well versed with JUnit, Mockito, integration tests, and performance tests.
- Individual contributor: write good-quality code in Java and Angular 16.
- Well versed with UI/UX designs; unit testing using Jest.
- Ability to build low-level designs and develop components with minimal assistance.
- Ability to effectively interact and collaborate with the development team.
- Ability to effectively communicate development progress to the Project Lead.
- Work with onshore, offshore, and matrixed development teams to implement business solutions.
- Write user/support documentation.
- Evaluate and adopt new dev tools, libraries, and approaches to improve delivery quality.
- Perform peer code review of project codebase changes.
- Act as SME to senior stakeholders and/or other team members.
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
- Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
- Apply fundamental knowledge of programming languages to design specifications.
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
- Serve as advisor or coach to new or junior analysts.
- Identify problems, analyze information, and make evaluative judgments to recommend and implement solutions.
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Operate with a limited level of direct supervision, exercising independence of judgment and autonomy.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations; adhering to policy; applying sound ethical judgment regarding personal behavior, conduct, and business practices; and escalating, managing, and reporting control issues with transparency.

Skills Required:
- 4 years of experience.
- Excellent knowledge of Spring, including Spring Framework, Spring Boot, Spring Security, Spring Web, and Spring Data.
- Good knowledge of threading, collections, exception handling, JDBC, Java OOD/OOP concepts, GoF design patterns, MoM and SOA design patterns, file I/O, parsing XML, JSON, delimited and fixed-length files, string matching/parsing/building, and working with binary data / byte arrays.
- Good knowledge of UI/UX design, Angular, and Jest for unit testing.
- Good knowledge of SQL (DB2/Oracle dialect preferred).
- Experience working with SOA and microservices utilizing REST.
- Experience designing and implementing cloud-ready applications and deployment pipelines on large-scale container platform clusters is a plus.
- Experience working in a Continuous Integration and Continuous Delivery environment; familiarity with Tekton, Harness, Jenkins, code quality tooling, etc.
- Knowledge of industry-standard best practices such as design patterns, coding standards, coding modularity, prototypes, etc.
- Experience in debugging, tuning, and optimizing components.
- Understanding of the SDLC lifecycle for Agile methodologies.
- Excellent written and oral communication skills.
- Experience developing applications in the financial services industry is preferred.

Nice-to-have experience:
- Kubernetes and Docker.
- Messaging systems: IBM MQ, Kafka, RabbitMQ, ActiveMQ, Tibco, etc.
- Tomcat, Jetty, Apache HTTPD.
- Ability to work with build/configure/deploy automation tools: Jenkins, LightSpeed, etc.
- Linux ecosystem.
- Autosys.
- APIm.
- APM tools: Dynatrace, AppDynamics, etc.
- Caching technologies: Redis, Hazelcast, Memcached, etc.

Qualifications:
- 4-8 years of relevant experience in the financial services industry.
- Intermediate-level experience in an Applications Development role.
- Consistently demonstrates clear and concise written and verbal communication.
- Demonstrated problem-solving and decision-making skills.
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Education:
- Bachelor's degree/University degree or equivalent experience.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
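Among the skills this posting lists, parsing fixed-length files is a classic banking-integration task. A minimal sketch follows; the record layout (field names and widths) is hypothetical, chosen only to illustrate the slicing approach.

```python
# Hypothetical layout: (field name, width) pairs for one fixed-length record.
LAYOUT = [("account", 10), ("currency", 3), ("amount", 12)]

def parse_fixed(line: str, layout=LAYOUT) -> dict:
    """Slice a fixed-length record into named, whitespace-stripped fields."""
    record, pos = {}, 0
    for name, width in layout:
        record[name] = line[pos:pos + width].strip()
        pos += width
    return record

row = parse_fixed("0012345678USD000000150075")
print(row)
```

The same table-driven layout works for building records (pad each field back to its width), which keeps parsing and writing symmetric.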
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Your Role and Impact
We are seeking a skilled Data Engineer to lead the migration from Hive Catalog to Databricks Unity Catalog on Azure. The Data Engineer will own the end-to-end migration of metadata and access controls from Hive Catalog to Unity Catalog within the Azure cloud environment. The role demands strong expertise in data cataloging, metadata management, Azure cloud infrastructure, and security best practices.

Your Contribution
- Analyze the existing Hive Catalog metadata, schema, and security configurations.
- Design and execute a robust migration plan to Unity Catalog with minimal disruption and data-integrity assurance.
- Collaborate with Data Governance, Security, and Cloud Infrastructure teams to implement access controls and policies leveraging Azure Active Directory (AAD).
- Develop automation scripts and tools to support migration, validation, and ongoing management.
- Troubleshoot migration challenges and provide post-migration support.
- Document migration processes and train stakeholders on Unity Catalog capabilities.
- Integrate Unity Catalog with Azure native services such as Azure Data Lake Storage Gen2, Azure Key Vault, and Azure Active Directory for security and identity management.
- Optimize Azure resource utilization during migration and production workloads.
- Keep current with Azure Databricks Unity Catalog enhancements and Azure cloud best practices.
- Strong knowledge of metadata management, data governance frameworks, and data cataloging.
- Proficient in SQL, Python, and scripting for automation.
- Hands-on experience with Azure Databricks, Apache Spark, and Azure cloud services, including Azure Data Lake Storage Gen2, Azure Key Vault, and Azure Active Directory.
- In-depth understanding of Azure cloud infrastructure: compute (VMs, Azure Databricks clusters), storage, networking, and security components.
- Experience integrating data catalog solutions with Azure identity and access management (Azure AD, RBAC).
- Strong grasp of data security, IAM policies, and access control in Azure environments.
- Excellent analytical, problem-solving, and communication skills.
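Migration validation of the kind described above usually includes reconciling metadata between the old and new catalogs. Here is a simplified sketch; the toy dictionaries stand in for real Hive and Unity Catalog listings, which in practice would be fetched via Spark SQL (`SHOW TABLES` / `DESCRIBE`) or the Unity Catalog APIs.

```python
def diff_catalogs(source: dict, target: dict) -> dict:
    """Compare table -> column-list mappings from two catalogs and report
    tables missing from the target or whose schemas drifted."""
    report = {"missing": [], "schema_drift": []}
    for table, columns in source.items():
        if table not in target:
            report["missing"].append(table)
        elif target[table] != columns:
            report["schema_drift"].append(table)
    return report

# Toy metadata snapshots standing in for Hive and Unity Catalog inventories.
hive = {"sales.orders": ["id", "amount"], "sales.items": ["id", "sku"]}
unity = {"sales.orders": ["id", "amount"]}
print(diff_catalogs(hive, unity))
```

Running such a diff before and after each migration wave gives the data-integrity assurance the posting calls for, and the report feeds naturally into the documented migration process.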
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role

BlackRock Company Overview:
BlackRock is a global leader in investment management, risk management, and advisory services for institutional and retail clients. We help clients achieve their goals and overcome challenges with a range of products, including separate accounts, mutual funds, iShares® (exchange-traded funds), and other pooled investment vehicles. We also offer risk management, advisory, and enterprise investment system services to a broad base of institutional investors through BlackRock Solutions®. Headquartered in New York City, as of February 5, 2025, we handle approximately $11.5 trillion in assets under management (AUM) and have around 19,000 employees in offices across 38 countries, with a significant presence in key global markets, including North and South America, Europe, Asia, Australia, the Middle East, and Africa.

Aladdin Data
When BlackRock was founded in 1988, the goal was to combine financial services with innovative technology. Today, BlackRock is a leading FinTech platform for investment management and technology services globally. Data is central to the Aladdin platform, differentiating us through our ability to consume, store, analyze, and gain insights from it. The Aladdin Data team maintains a pioneering data platform that delivers high-quality data to users, including investors, operations staff, data scientists, and engineers. Our aim is to provide consistent, high-quality data while evolving our platform to support the firm's growth. We build high-performance data pipelines, enable data discovery and consumption, and continually enhance our data storage capabilities.

Studio Self-service Front-end Engineering
Our team develops full-stack web applications for vendor data self-service, client data configuration, pipelines, and workflows. We support over a thousand internal users and hundreds of clients.
We manage the data toolkit, including client-facing data requests, modeling, configuration management, ETL tools, CRUD applications, customized workflows, and back-end APIs, to deliver exceptional client and user experiences with intuitive tools and excellent UX.

Job Description and Responsibilities
- Design, build, and maintain various front-end and corresponding back-end platform components, working with Product and Program Managers.
- Implement new user interfaces and business functionalities to meet evolving business and customer requirements, working with end users, with clear and concise documentation.
- Analyze and improve the performance of applications and related operational workflows to improve efficiency and throughput.
- Diagnose, research, and resolve software defects.
- Ensure software stability through documentation, code reviews, and regression, unit, and user-acceptance testing for smooth production operations.
- Lead all aspects of level 2 and 3 application support, ensuring smooth operation of existing processes and meeting new business opportunities.
- Be a self-starter and work with minimal direction in a globally distributed team.

Role Essentials
- A passion for engineering highly available, performant full-stack applications with a "Student of Markets and Technology" attitude.
- Bachelor's or master's degree, or equivalent experience, in computer science or engineering.
- 3+ years of professional experience working in teams.
- Experience in full-stack user-facing application development using web technologies (Angular, React, JavaScript) and Java-based REST APIs (Spring framework).
- Experience with testing frameworks such as Protractor, TestCafe, and Jest.
- Knowledge of relational database development and at least one NoSQL database (e.g., Apache Cassandra, MongoDB).
- Knowledge of software development methodologies (analysis, design, development, testing) and a basic understanding of Agile/Scrum methodology and practices.
Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. 
We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
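The CRUD applications this role mentions share a common core regardless of framework. Below is a minimal in-memory sketch of that core in Python; the names are illustrative, and the posting's actual stack would pair a Java/Spring REST API with an Angular or React front end over a real database.

```python
from dataclasses import dataclass, field

@dataclass
class CrudStore:
    """In-memory store backing a hypothetical configuration resource."""
    items: dict = field(default_factory=dict)
    next_id: int = 1

    def create(self, data: dict) -> int:
        item_id = self.next_id
        self.items[item_id] = data
        self.next_id += 1
        return item_id

    def read(self, item_id: int):
        return self.items.get(item_id)  # None when the id is unknown

    def update(self, item_id: int, data: dict) -> bool:
        if item_id not in self.items:
            return False
        self.items[item_id] = {**self.items[item_id], **data}  # partial update
        return True

    def delete(self, item_id: int) -> bool:
        return self.items.pop(item_id, None) is not None

store = CrudStore()
cfg_id = store.create({"vendor": "acme", "enabled": False})
store.update(cfg_id, {"enabled": True})
print(store.read(cfg_id))  # {'vendor': 'acme', 'enabled': True}
```

Each method maps one-to-one onto the POST/GET/PUT/DELETE verbs a REST layer would expose, which keeps the transport layer thin and testable.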
Posted 1 week ago