0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description JOB SUMMARY This position participates in the design, build, test, and delivery of Machine Learning (ML) models and software components that solve challenging business problems for the organization, working in collaboration with the Business, Product, Architecture, Engineering, and Data Science teams. This position engages in the assessment and analysis of structured and unstructured data sources (internal and external) to uncover opportunities for ML and Artificial Intelligence (AI) automation, predictive methods, and quantitative modeling across the organization. This position establishes and configures scalable, cost-effective, end-to-end solution design pattern components to support prediction model transactions. This position designs trials and tests to measure the success of software and systems, and works with teams, or individually, to implement ML/AI models at production scale. Responsibilities The MLOps developer maintains existing models that support applications such as the digital insurance application and the claims recommendation engine. They will be responsible for setting up cloud monitoring jobs and performing quality assurance and edge-case testing to ensure the ML product works within the application. They will also need to be on call on weekends to bring the application back online in case of failure. Studies and transforms data science prototypes into ML systems using appropriate datasets and data representation models. Researches and implements appropriate ML algorithms and tools that create new systems and processes powered by ML and AI techniques, according to business requirements. Collaborates with others to deliver ML products and systems for the organization. Designs workflows and analysis tools to streamline the development of new ML models at scale. Creates and evolves ML models and software that enable state-of-the-art intelligent systems, using best practices across the engineering and modeling lifecycles. Extends existing ML libraries and frameworks with developments in the Data Science and Machine Learning field. Establishes, configures, and supports scalable Cloud components that serve prediction model transactions. Integrates data from authoritative internal and external sources to form the foundation of a new Data Product that delivers the insights ML systems need to support business outcomes. Qualifications Requirements: Ability to code in Python/Spark, with enough Apache Beam knowledge to build data transfer jobs in Dataproc. Experience designing and building data-intensive solutions using distributed computing within a multi-line business environment. Familiarity with Machine Learning and Artificial Intelligence frameworks (e.g., Keras, PyTorch), libraries (e.g., scikit-learn), and the tools and Cloud-AI technologies that streamline the development of Machine Learning and AI systems.
Experience establishing and configuring scalable, cost-effective, end-to-end solution design pattern components to support the serving of batch and live streaming prediction model transactions. Possesses creative and critical thinking skills. Experience developing Machine Learning models such as classification/regression models, NLP models, and deep learning models, with a focus on productionizing those models into product features. Experience with scalable data processing, feature development, and model optimization. Solid understanding of statistics, such as forecasting, time series, hypothesis testing, classification, clustering, and regression analysis, and how to apply that knowledge to understanding and evaluating Machine Learning models. Knowledge of the software development lifecycle (SDLC), Agile development practices, and cloud technology infrastructures and patterns related to product development. Advanced math skills in linear algebra, Bayesian statistics, and group theory. Works collaboratively, both in a technical and cross-functional context. Strong written and verbal communication. Bachelor’s (BS/BA) degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience. Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
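By way of illustration, a minimal Apache Beam data-transfer job of the kind these qualifications mention might look like the sketch below. It is a hedged example, not UPS's actual code: the bucket paths and parsing logic are placeholders, and the runner choice is an assumption (on Dataproc, Beam pipelines typically run via the Spark runner; Dataflow is the managed alternative).

```python
# Minimal Apache Beam batch pipeline (Python SDK): a GCS-to-GCS transfer
# with a trivial transform. All paths and options are illustrative.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(
        runner="SparkRunner",  # assumption: Beam-on-Spark when submitted to Dataproc
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv")
            | "ParseFields" >> beam.Map(lambda line: line.split(","))
            | "DropEmpty" >> beam.Filter(lambda fields: len(fields) > 1)
            | "Rejoin" >> beam.Map(lambda fields: ",".join(fields))
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/part")
        )

if __name__ == "__main__":
    run()
```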
Posted 7 hours ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Notice period: 30 days to immediate. Role description: GCP, Python, Apache Beam. 3 to 8 years of overall IT experience, including hands-on experience in Big Data technologies. Mandatory hands-on experience in Python and PySpark: Python as a language is practically usable for anything; we are looking for application development, Extract-Transform-Load, and data lake curation experience using Python. Builds PySpark applications using Spark DataFrames in Python, with the Jupyter notebook and PyCharm IDEs. Has worked on optimizing Spark jobs that process huge volumes of data. Hands-on experience with version control tools like Git. Has worked on Amazon's analytics services (Amazon EMR, Amazon Athena, AWS Glue), compute services (AWS Lambda, Amazon EC2), the S3 storage service, and a few other services such as SNS. Experience or knowledge of bash shell scripting will be a plus. Has built ETL processes that copy data and structurally transform it, involving a wide variety of formats like CSV, TSV, XML, and JSON. Experience working with fixed-width, delimited, and multi-record file formats. Good to have knowledge of data warehousing concepts: dimensions, facts, schemas (snowflake, star), etc. Has worked with columnar storage formats (Parquet, Avro, ORC) and is well versed in compression techniques (Snappy, Gzip). Good to have knowledge of at least one AWS database: Aurora, RDS, Redshift, ElastiCache, DynamoDB. Skills Mandatory Skills: GCP, Apache Spark, Python, SparkSQL, Big Data Hadoop Ecosystem
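For context, the PySpark DataFrame and data-curation work described above might look like the minimal sketch below; the bucket, schema, and column names are illustrative placeholders, not part of the actual role.

```python
# Minimal PySpark curation job: read delimited data, type and de-duplicate it,
# and write Snappy-compressed Parquet (the Spark default for columnar output).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

# Read raw TSV; header/delimiter options mirror the delimited-format handling above.
orders = (
    spark.read
    .option("header", True)
    .option("delimiter", "\t")
    .csv("s3://example-bucket/raw/orders/")
)

# Curate: typed timestamp, basic validity filter, de-duplication, derived partition key.
curated = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").cast("double") > 0)
    .dropDuplicates(["order_id"])
)

(curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/"))
```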
Posted 7 hours ago
7.0 - 9.0 years
6 - 8 Lacs
Hyderābād
On-site
General information Country India State Telangana City Hyderabad Job ID 45479 Department Development Description & Requirements The Senior Java Developer is responsible for architecting and developing advanced Java solutions. This role involves leading the design and implementation of microservice architectures with Spring Boot, optimizing services for performance and scalability, and ensuring code quality. The Senior Developer will also mentor junior developers and collaborate closely with cross-functional teams to deliver comprehensive technical solutions. Essential Duties: Lead the development of scalable, robust, and secure Java components and services. Architect and optimize microservice solutions using Spring Boot. Translate customer requirements into comprehensive technical solutions. Conduct code reviews and maintain high code quality standards. Optimize and scale microservices for performance and reliability. Collaborate effectively with cross-functional teams to innovate and develop solutions. Lead projects and mentor engineers in best practices and innovative solutions. Coordinate with customer and client-facing teams for effective solution delivery. Basic Qualifications: Bachelor’s degree in Computer Science or a related field. 7-9 years of experience in Java development. Expertise in designing and implementing microservices with Spring Boot. Extensive experience applying design patterns and system design principles, with expertise in event-driven and domain-driven design methodologies. Extensive experience with multithreading and asynchronous and defensive programming. Proficiency in MongoDB, SQL databases, and S3 data storage. Experience with Kafka, Kubernetes, AWS services & the AWS SDK. Hands-on experience with Apache Spark. Strong knowledge of Linux, Git, and Docker. Familiarity with Agile methodologies and tools like Jira and Confluence. Excellent communication and leadership skills. Preferred Qualifications Experience with Spark using Spring Boot. Familiarity with the C4 Software Architecture Model. Experience using tools like Lucidchart for architecture and flow diagrams. About Infor Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information, visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment.
Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 7 hours ago
12.0 - 16.0 years
2 - 9 Lacs
Hyderābād
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 12-16 years of experience and the below requirements and skills: Advanced SQL Development: Write complex SQL queries for data extraction, transformation, and analysis. Optimize SQL queries for performance and scalability. SQL Tuning and Joins: Analyze and improve query performance. Deep understanding of joins, indexing, and query execution plans. GCP BigQuery and GCS: Work with Google BigQuery for data warehousing and analytics. Manage and integrate data using Google Cloud Storage (GCS). Airflow DAG Development: Design, develop, and maintain workflows using Apache Airflow. Write custom DAGs to automate data pipelines and processes. Python Programming: Develop and maintain Python scripts for data processing and automation. Debug and optimize Python code for performance and reliability. Shell Scripting: Write and debug basic shell scripts for automation and system tasks. Continuous Learning: Stay updated with the latest tools and technologies in data engineering. Demonstrate a strong ability and attitude to learn and adapt quickly. Communication: Collaborate effectively with cross-functional teams. Clearly communicate technical concepts to both technical and non-technical stakeholders. Requirements To be successful in this role, you should meet the following requirements: Advanced SQL writing and query optimization. Strong understanding of SQL tuning, joins, and indexing. Hands-on experience with GCP services, especially BigQuery and GCS. Proficiency in Python programming and debugging. Experience with Apache Airflow and DAG development. Basic knowledge of shell scripting. Excellent problem-solving skills and a growth mindset. Strong verbal and written communication skills. Experience with data pipeline orchestration and ETL processes. Familiarity with other GCP services like Dataflow or Pub/Sub. Knowledge of CI/CD pipelines and version control (e.g., Git). You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
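As a rough illustration of the Airflow DAG development this role mentions, the sketch below wires a BigQuery extraction task to a Python transform. It is an assumption-laden example, not HSBC code: the project, dataset, and SQL are hypothetical, and it presumes Airflow 2.x with the Google provider package installed.

```python
# A small custom Airflow DAG: a daily BigQuery extraction followed by a
# Python transform. Table names and the query are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

def transform(**context):
    # Placeholder transform step; real logic would read the extracted partition.
    print("transforming partition", context["ds"])

with DAG(
    dag_id="daily_bq_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BigQueryInsertJobOperator(
        task_id="extract",
        configuration={
            "query": {
                "query": (
                    "SELECT user_id, COUNT(*) AS events "
                    "FROM `example-project.analytics.raw_events` "
                    "WHERE dt = '{{ ds }}' GROUP BY user_id"
                ),
                "useLegacySql": False,
            }
        },
    )

    run_transform = PythonOperator(task_id="transform", python_callable=transform)

    extract >> run_transform
```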
Posted 7 hours ago
12.0 years
5 - 9 Lacs
Hyderābād
On-site
Job Description Overview PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tool integration, migrations, platform maintenance, and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles, and automation. Responsibilities The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tool adoption, cost optimization, and supporting new patterns/design solutions using the Databricks platform. Here’s a breakdown of typical responsibilities: Core Technical Responsibilities Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools. Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming). Create integration guidelines to configure and integrate Databricks with other existing security tools relevant to data access control. Implement data security and governance using Unity Catalog, access controls, and data classification techniques. Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP. Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments. Collaboration & Advisory Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning. Partner with architects and business stakeholders to align Databricks solutions with enterprise goals. Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases. Strategic & Leadership Contributions Mentor junior engineers and promote knowledge sharing across teams. Contribute to platform adoption strategies, including training, documentation, and internal evangelism. Stay current with Databricks innovations and recommend enhancements to existing architectures. Specialized Expertise (Optional but Valuable) Machine Learning & AI integration using MLflow, AutoML, or custom models. Cost optimization and workload sizing for large-scale data processing. Compliance and audit readiness for regulated industries. Qualifications Bachelor’s degree in computer science. At least 12 years of experience in IT cloud infrastructure, architecture and operations, including security, with at least 5 years in a platform admin role. Strong understanding of data security principles and best practices. Expertise in the Databricks platform, its security features, Unity Catalog, and data access control mechanisms. Experience with data classification and masking techniques. Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms. Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks and subnets. Proficient in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Certifications in Azure/AWS/Databricks platform administration, networking and security are preferred. Strong self-organization, time management and prioritization skills. A high level of attention to detail, excellent follow-through, and reliability. Strong collaboration, teamwork and relationship-building skills across multiple levels and functions in the organization. Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams. Strategic thinker focused on business-value results that utilize technical solutions. Strong communication skills in writing, speaking, and presenting. Capable of working effectively in a multi-tasking environment. Fluent in English.
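To make the batch-and-streaming responsibility above concrete, here is a small, hypothetical Structured Streaming sketch that lands a Kafka topic in a Delta table under a Unity Catalog namespace. The broker, topic, checkpoint path, and three-level table name are placeholders; on Databricks the Spark session and Delta support come from the runtime.

```python
# Illustrative PySpark Structured Streaming job: Kafka -> Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Stream raw events from Kafka; broker and topic are placeholders.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string"),
        F.col("timestamp"),
    )
)

# Append into a Delta table addressed by a Unity Catalog three-level name.
(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .toTable("main.raw.events"))
```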
Posted 7 hours ago
7.0 years
0 Lacs
India
On-site
About Us: MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences. Are You The One? As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. MatchMove Culture: We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
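As a sketch of the Iceberg-on-Glue setup this role describes, the snippet below configures a Spark session against a Glue-backed Iceberg catalog and demonstrates a time-travel read. Everything here is an assumption for illustration: the catalog name, warehouse bucket, and table are invented, and the exact Iceberg/Spark package versions are presumed to be provided by the cluster.

```python
# Hypothetical: Spark + Apache Iceberg cataloged in AWS Glue, stored on S3.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-lake")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Iceberg table with a hidden daily partition transform; Iceberg handles
# schema evolution and snapshot-based time travel over this table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.fraud.transactions (
        txn_id STRING, amount DOUBLE, ts TIMESTAMP)
    USING iceberg
    PARTITIONED BY (days(ts))
""")

# Time-travel query against an earlier snapshot (timestamp is illustrative;
# requires a Spark version with SQL time-travel support).
spark.sql(
    "SELECT * FROM glue.fraud.transactions "
    "TIMESTAMP AS OF '2024-01-01 00:00:00'"
).show()
```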
Posted 7 hours ago
3.0 years
15 - 22 Lacs
Gurugram, Haryana, India
Remote
Experience : 3.00 + years Salary : INR 1500000-2200000 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: LINEN.Cloud) (*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud) What do you need for this opportunity? Must have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes. LINEN.Cloud is looking for: Java Developer Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development Java, Angular, Microservices, React.js, SQL We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions. Responsibilities: Designing, implementing, and unit testing Java applications. Aligning application design with business goals. Debugging and resolving technical problems that arise. Recommending changes to the existing Java infrastructure. Ensuring continuous professional self-development. Requirements: Experience developing and testing Java web services (RESTful primarily, XML, JSON), supporting integration and enabling access via API calls. Experience with Tomcat, Apache, and similar web server technologies. Hands-on experience working with RabbitMQ and Kafka. Experience with the Spring Boot framework. Hands-on experience with Angular/Node.js is preferred. Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus. Experience with platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus. Agile/Scrum expertise. Experience establishing and enforcing branching and software development processes and deployment via CI/CD. Competencies: Aligning application design with business goals. Debugging and resolving technical problems that arise. Recommending changes to the existing Java infrastructure. Ensuring continuous professional self-development. Team spirit and strong communication skills. Customer- and service-oriented, with a confident appearance in an international environment. Very high proficiency in English. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 7 hours ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com Position Title: Associate Director (Senior Architect – Data) Department: IT Location: Gurgaon/Bangalore Job Summary The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at the conceptual, logical, business-area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining the enterprise data architecture and ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects. Key Responsibilities Strategy & Planning Develop and deliver long-term strategic goals for the data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity. Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement. Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks. Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Ensure that data strategies and architectures are aligned with regulatory compliance. Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects’ goals. Ensure effective data management throughout the project lifecycle. Acquisition & Deployment Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.). Liaise with vendors and service providers to select the products or services that best meet company goals. Operational Management Assess and determine governance, stewardship, and frameworks for managing data across the organization. Develop and promote data management methodologies and standards.
Document information products from business processes and create data entities. Create entity relationship diagrams to show the digital thread across the value streams and enterprise. Create data normalization across all systems and databases to ensure there is a common definition of data entities across the enterprise. Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data. Address the regulatory compliance requirements of each country and ensure our data is secure and compliant. Select and implement the appropriate tools, software, applications, and systems to support data technology goals. Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Collaborate with project managers and business unit leaders on all projects involving enterprise data. Address data-related problems regarding systems integration, compatibility, and multiple-platform integration. Act as a leader and advocate of data management, including coaching, training, and career development for staff. Develop and implement key components as needed to create testing criteria that guarantee the fidelity and performance of the data architecture. Document the data architecture and environment to maintain a current and accurate view of the larger data picture. Identify and develop opportunities for data reuse, migration, or retirement. Data Architecture Design: Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes. Design and implement scalable, high-performance data solutions that meet business requirements. Data Governance: Establish and enforce data governance policies and procedures as agreed with stakeholders. Maintain data integrity, quality, and security within Finance, HR, and other such enterprise systems. Data Migration: Oversee the data migration process from legacy systems to the new systems being put in place. Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness. Master Data Management: Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes. Provide data management (create, update and delimit) methods to ensure master data is governed. Stakeholder Collaboration: Collaborate with various stakeholders, including business users and other system vendors, to understand data requirements. Ensure the enterprise system meets the organization's data needs. Training and Support: Provide training and support to end users on data entry, retrieval, and reporting within the relevant enterprise systems. Promote user adoption and proper use of data. Data Quality Assurance: Implement data quality assurance measures to identify and correct data issues. Ensure Oracle Fusion and other enterprise systems contain reliable and up-to-date information. Reporting and Analytics: Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems. Enable data-driven decision-making through robust data analysis. Continuous Improvement: Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems. Leverage new technologies for enhanced data management to support evolving business needs.
Technology and Tools: Oracle Fusion Cloud. Data modeling tools (e.g., ER/Studio, erwin). ETL tools (e.g., Informatica, Talend, Azure Data Factory). Data pipelines: understanding of data pipeline tools like Apache Airflow and AWS Glue. Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached). Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM). Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP). Hyperscalers/cloud platforms (e.g., AWS, Azure). Big Data technologies such as Hadoop, HDFS, MapReduce, and Spark. Cloud platforms such as Amazon Web Services (including RDS, Redshift, and S3) and Microsoft Azure services like Azure SQL Database and Cosmos DB, and experience in Google Cloud Platform services such as BigQuery and Cloud Storage. Programming languages (e.g., Java, J2EE, EJB, .NET, WebSphere, etc.). SQL: strong SQL skills for querying and managing databases. Python: proficiency in Python for data manipulation and analysis. Java: knowledge of Java for building data-driven applications. Data security and protocols: understanding of data security protocols and compliance standards. Key Competencies Qualifications: Education: Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred. Experience: 10+ years overall, with at least 7 years of experience in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable. Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members. Excellent documentation and presentation skills. Good knowledge of applicable data privacy practices and laws. Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus. Behavioral: A self-starter, an excellent planner and executor and, above all, a good team player. Excellent communication and interpersonal skills are a must. Must possess organizational skills, including multi-tasking capability, priority setting and meeting deadlines. Ability to build collaborative relationships and effectively leverage networks to mobilize resources. Initiative to learn the business domain is highly desirable. Likes a dynamic and constantly evolving environment and requirements.
Posted 7 hours ago
10.0 years
30 - 34 Lacs
India
Remote
*** Need Databricks SME *** Location: offshore (anywhere in India, remote); must work EST hours (US shift). Need 10+ years of experience. 5 Must-Haves: 1. Data expertise: has worked in Azure Databricks pipelines and shutting down clusters; 2 or more years' experience. 2. Unity Catalog migration: well versed; has done Terraform scripting in DevOps; can code and understand the code, understand the logic behind the scenes, and automate functionality. 3. Terraform expertise: code building; 3 or more years. 4. Understanding of data mesh architecture: decoupling applications and the ability to have things run in parallel; clear understanding; 2-plus years of experience with the Microsoft Azure cloud platform. 5. Great problem solver. Key Responsibilities: Architect, configure, and optimize Databricks pipelines for large-scale data processing within an Azure Data Lakehouse environment. Set up and manage Azure infrastructure components including Databricks workspaces, Azure containers (AKS/ACI), storage accounts, and networking. Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana. Collaborate with platform and data engineering teams to enable a microservices-based architecture for scalable and modular data solutions. Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps. Required Skills & Experience: Strong hands-on experience with Azure Databricks, Delta Lake, and Apache Spark. Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and networking. Proven experience in microservices architecture and container orchestration. Expertise in infrastructure-as-code, scripting (Python, Bash), and DevOps tooling. Familiarity with data governance, security, and cost optimization in cloud environments. Bonus: Experience with event-driven architectures (Kafka/Event Grid). Knowledge of data mesh principles and distributed data ownership. Interview: Two rounds of interviews (1st with the manager, 2nd with the team). Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,400,000.00 per year Schedule: US shift
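For a flavor of the cluster-housekeeping work mentioned above (for example, shutting down idle clusters), the sketch below calls the documented Databricks REST 2.0 cluster endpoints via `requests`. The host, token, and the `auto_shutdown` tag convention are assumptions invented for the example, not part of any actual environment.

```python
# Hedged sketch: terminate tagged, running Databricks clusters via the REST API.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.4.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

clusters = requests.get(f"{HOST}/api/2.0/clusters/list", headers=HEADERS).json()

for cluster in clusters.get("clusters", []):
    # Terminate anything RUNNING that our (hypothetical) tag marks for shutdown.
    tags = cluster.get("custom_tags", {}) or {}
    if cluster["state"] == "RUNNING" and tags.get("auto_shutdown") == "true":
        requests.post(
            f"{HOST}/api/2.0/clusters/delete",  # terminates; not a permanent delete
            headers=HEADERS,
            json={"cluster_id": cluster["cluster_id"]},
        )
```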
Posted 7 hours ago
8.0 - 12.0 years
2 - 9 Lacs
Hyderābād
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 8-12 years of experience and the below requirements and skills: Advanced SQL Development: Write complex SQL queries for data extraction, transformation, and analysis. Optimize SQL queries for performance and scalability. SQL Tuning and Joins: Analyze and improve query performance. Deep understanding of joins, indexing, and query execution plans. GCP BigQuery and GCS: Work with Google BigQuery for data warehousing and analytics. Manage and integrate data using Google Cloud Storage (GCS). Airflow DAG Development: Design, develop, and maintain workflows using Apache Airflow. Write custom DAGs to automate data pipelines and processes. Python Programming: Develop and maintain Python scripts for data processing and automation. Debug and optimize Python code for performance and reliability. Shell Scripting: Write and debug basic shell scripts for automation and system tasks. Continuous Learning: Stay updated with the latest tools and technologies in data engineering. Demonstrate a strong ability and attitude to learn and adapt quickly. Communication: Collaborate effectively with cross-functional teams. Clearly communicate technical concepts to both technical and non-technical stakeholders. Requirements To be successful in this role, you should meet the following requirements: Advanced SQL writing and query optimization. Strong understanding of SQL tuning, joins, and indexing. Hands-on experience with GCP services, especially BigQuery and GCS. Proficiency in Python programming and debugging. Experience with Apache Airflow and DAG development. Basic knowledge of shell scripting. Excellent problem-solving skills and a growth mindset. Strong verbal and written communication skills. Experience with data pipeline orchestration and ETL processes. Familiarity with other GCP services like Dataflow or Pub/Sub. Knowledge of CI/CD pipelines and version control (e.g., Git). You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
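To illustrate the BigQuery side of this role (an Airflow DAG sketch accompanies the similar posting above), here is a small, hypothetical example of running a parameterized query with the official google-cloud-bigquery client; the project, dataset, and column names are placeholders.

```python
# Parameterized BigQuery query from Python using the official client library.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT dt, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE dt BETWEEN @start AND @end
    GROUP BY dt
    ORDER BY dt
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01"),
            bigquery.ScalarQueryParameter("end", "DATE", "2024-01-31"),
        ]
    ),
)
for row in job.result():  # result() blocks until the query job finishes
    print(row.dt, row.events)
```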
Posted 7 hours ago
2.0 years
1 - 3 Lacs
Cochin
On-site
Linux Systems Engineer: Experience Level: minimum 2 to 4 years as a Systems Engineer. JOB DESCRIPTION Armia Systems Pvt Ltd at Infopark, Kochi, is looking for a proactive and dedicated Linux Systems Engineer to be part of our infrastructure management team. SPECIFICATION Excellent communication skill in English (written). Well-versed in different web servers such as Apache and Nginx. Hands-on experience handling all cPanel services, scripts, files, and log locations, and debugging all issues in them. Excellent knowledge of hosting concepts and core Linux. Excellent knowledge of hosting control panels. Work with virtualization technologies (KVM, VMware, Proxmox) and cloud platforms (AWS, GCP, DigitalOcean) for VM provisioning. Provide proactive server maintenance and hardening. Experienced in website, server, and VPS migrations through different control panels. Experience handling website-related issues, spam mitigation, mail service-related issues, and Apache & PHP compilations. Experience in MySQL, PHP, and Exim troubleshooting. Perform security audits and patch management. Expert in proactive monitoring and server management support. Install, update, and upgrade packages on operating systems (RHEL, CentOS, AlmaLinux, Ubuntu & Debian). Monitor server performance using popular monitoring tools like Zabbix and Icinga. Provide server support for retail and enterprise customers through different channels. Job Type: Full-time Pay: ₹150,000.00 - ₹350,000.00 per year Work Location: In person Speak with the employer +91 8590136417
Posted 7 hours ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Python (Programming Language), Apache Airflow Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Continuously evaluate and improve application performance and user experience. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Apache Airflow, Python (Programming Language). - Strong understanding of data integration and ETL processes. - Experience with cloud-based data solutions and architectures. - Familiarity with data governance and management best practices. Additional Information: - The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Kolkata office. - A 15 years full time education is required.
Posted 8 hours ago
8.0 - 10.0 years
6 - 6 Lacs
Noida
On-site
Posted On: 31 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description AWS data developers with 8-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and the ability to deliver independently. Mandatory Competencies Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift Beh - Communication Big Data - Big Data - PySpark Database - Database Programming - SQL Programming Language - Python - Apache Airflow Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
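For reference, a typical AWS Glue PySpark job of the kind this role involves follows the skeleton below; the catalog database, table, and output bucket are illustrative placeholders only.

```python
# Skeleton AWS Glue PySpark job: catalog read -> Spark SQL aggregate -> Parquet.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are hypothetical).
df = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
).toDF()

df.createOrReplaceTempView("orders")
daily = spark.sql(
    "SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date"
)
daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")

job.commit()
```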
Posted 8 hours ago
6.0 years
0 Lacs
India
Remote
Experience : 6.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope) What do you need for this opportunity? Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python Netskope is looking for: About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments. What's In It For You: You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills. What You Will Be Doing Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps best practices to deploy and monitor machine learning models in production. Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards. Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects. Required Skills And Experience AI/ML Expertise Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases. Data Engineering Expertise in designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Cloud and Security Knowledge Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling. Software Engineering Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Leadership and Collaboration Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders. Education BSCS or equivalent required; MSCS or equivalent strongly preferred. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
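As a toy illustration of the retrieval half of a RAG system (a responsibility named above), the sketch below uses TF-IDF similarity as a stand-in for an embedding model and vector database (such as the Pinecone or PGVector mentioned in the posting) so that it runs with scikit-learn alone; the documents, query, and prompt format are invented for the example.

```python
# Minimal retrieval sketch: rank documents against a query, then assemble a
# grounded prompt. A real system would swap TF-IDF for embeddings + a vector DB.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Unusual login from a new country triggered an MFA challenge.",
    "Data exfiltration alerts correlate with large outbound transfers.",
    "Routine patching completed across the web tier.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("possible account takeover")
prompt = "Answer using only this context:\n" + "\n".join(context)
# `prompt` would then be sent to the LLM of choice to generate the answer.
print(prompt)
```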
Posted 8 hours ago
3.0 years
15 - 22 Lacs
India
Remote
Experience : 3.00 + years Salary : INR 1500000-2200000 / year (based on experience) Expected Notice Period : 30 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: LINEN.Cloud) (*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud) What do you need for this opportunity? Must have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes. LINEN.Cloud is looking for: Java Developer Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development Java, Angular, Microservices, React.js, SQL We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions. Responsibilities: Designing, implementing, and unit testing Java applications. Aligning application design with business goals. Debugging and resolving technical problems that arise. Recommending changes to the existing Java infrastructure. Ensuring continuous professional self-development. Requirements: Experience developing and testing Java web services (RESTful primarily, XML, JSON), supporting integration and enabling access via API calls. Experience with Tomcat, Apache, and similar web server technologies. Hands-on experience working with RabbitMQ and Kafka. Experience with the Spring Boot framework. Hands-on experience with Angular/Node.js is preferred. Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus. Experience with platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus. Agile/Scrum expertise. Experience establishing and enforcing branching and software development processes and deployment via CI/CD. Competencies: Aligning application design with business goals. Debugging and resolving technical problems that arise. Recommending changes to the existing Java infrastructure. Ensuring continuous professional self-development. Team spirit and strong communication skills. Customer- and service-oriented, with a confident appearance in an international environment. Very high proficiency in English. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 8 hours ago
15.0 years
0 Lacs
Indore
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: MySQL, Python (Programming Language), Google BigQuery
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical support.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Spark.
- Good To Have Skills: Experience with MySQL, Python (Programming Language), Google BigQuery.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying applications in cloud environments.
- Familiarity with data integration and ETL processes.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Indore office.
- A 15 years full time education is required.
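As a hedged illustration of the skill combination this role lists (Apache Spark with MySQL and BigQuery), the sketch below reads a MySQL table over JDBC, aggregates it, and writes the result to BigQuery. It assumes the MySQL JDBC driver and the spark-bigquery connector are available on the cluster; the connection details, table names, and bucket are invented for the example.

```python
# A hedged sketch of a Spark ETL job: MySQL (extract) -> aggregate
# (transform) -> BigQuery (load). Names and credentials are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read a source table from MySQL over JDBC.
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://db-host:3306/sales")
          .option("dbtable", "orders")
          .option("user", "etl_user").option("password", "***")
          .load())

# Transform: a simple daily aggregate.
daily = (orders.groupBy(F.to_date("created_at").alias("order_date"))
               .agg(F.count("*").alias("order_count"),
                    F.sum("amount").alias("revenue")))

# Load: write to BigQuery via the connector (a staging GCS bucket is needed).
(daily.write.format("bigquery")
      .option("table", "analytics.daily_orders")
      .option("temporaryGcsBucket", "etl-staging-bucket")
      .mode("overwrite")
      .save())
```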
Posted 8 hours ago
3.0 years
15 - 22 Lacs
Agra, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 1500000-2200000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: LINEN.Cloud)
(*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)
What do you need for this opportunity?
Must have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes
LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development
Skills: Java, Angular, Microservices, React.js, SQL
We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.
Responsibilities:
Designing, implementing, and unit testing Java applications.
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Requirements:
Experience developing and testing Java Web Services (RESTful primarily, plus XML and JSON) and supporting integration and enabling access via API calls.
Experience with Tomcat, Apache, and similar web server technologies.
Hands-on experience working with RabbitMQ and Kafka.
Experience with the Spring Boot framework.
Hands-on experience with Angular/Node.js is preferred.
Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
Experience with virtualization platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
Agile/Scrum expertise.
Experience establishing and enforcing branching and software development processes and deployment via CI/CD.
Competencies:
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Team spirit and strong communication skills.
Customer- and service-oriented, with a confident appearance in an international environment.
Very high proficiency in English.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 8 hours ago
6.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Netskope is looking for:
About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering: Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration: Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders.
Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
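For a flavour of the anomaly-detection work this posting describes, here is a small, self-contained sketch using scikit-learn's IsolationForest on synthetic "network telemetry". The feature names, distributions, and contamination rate are assumptions for illustration, not Netskope specifics.

```python
# Illustrative anomaly detection over synthetic network telemetry with an
# IsolationForest. Features and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic features per session: bytes_sent, bytes_received, duration_sec.
normal = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))
spikes = rng.normal(loc=[5000, 100, 300], scale=[500, 20, 30], size=(10, 3))
traffic = np.vstack([normal, spikes])

# contamination is the assumed share of anomalies in the data.
model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
labels = model.predict(traffic)          # -1 = anomaly, 1 = normal
print(f"flagged {int((labels == -1).sum())} of {len(traffic)} sessions")
```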
Posted 8 hours ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Netskope is looking for:
About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering: Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration: Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders.
Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
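As one concrete (and hedged) reading of the vector-database requirement above, this sketch runs a nearest-neighbour query against Postgres with the pgvector extension, which is the foundation of a PGVector-backed RAG retriever. It assumes the pgvector extension and the `pgvector` Python package are installed; the DSN, table, and column names are illustrative.

```python
# A hedged sketch of nearest-neighbour retrieval with pgvector in Postgres.
# Assumes `CREATE EXTENSION vector` has been run and a `document_chunks`
# table with an `embedding vector(384)` column exists (both assumptions).
import numpy as np
import psycopg2
from pgvector.psycopg2 import register_vector

conn = psycopg2.connect("dbname=rag user=app")  # placeholder DSN
register_vector(conn)                           # adapt numpy arrays <-> vector

query_embedding = np.random.rand(384).astype(np.float32)  # stand-in embedding

with conn.cursor() as cur:
    # `<->` is pgvector's L2 distance operator; smallest distance wins.
    cur.execute(
        "SELECT chunk_text FROM document_chunks "
        "ORDER BY embedding <-> %s LIMIT 5",
        (query_embedding,),
    )
    context_chunks = [row[0] for row in cur.fetchall()]
```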
Posted 8 hours ago
6.0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)
What do you need for this opportunity?
Must have skills required: Airflow, LLMs, MLOps, Generative AI, Python
Netskope is looking for:
About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead development, validation, publishing and maintenance of logical and physical data models that support various OLTP and analytics environments.
What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.
What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.
Required Skills And Experience
AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn. Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.
Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink). Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Familiarity with vector databases such as Pinecone and PGVector and their application in RAG systems. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.
Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
Software Engineering: Proficiency in Python, Java, or Scala for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications.
Leadership and Collaboration: Proven ability to lead cross-functional teams and mentor engineers. Strong communication skills to present complex technical concepts to stakeholders.
Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
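To illustrate the Kafka-based ingestion listed among the big-data skills above, here is a minimal consumer sketch using the kafka-python client. The topic, broker address, group ID, and filtering predicate are hypothetical; a production pipeline would add batching, retries, dead-lettering, and schema validation.

```python
# A minimal Kafka ingestion sketch with kafka-python. All names are
# placeholders; the exfiltration predicate is purely illustrative.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "security-events",                           # assumed topic name
    bootstrap_servers=["broker-1:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="threat-detection",
)

for message in consumer:
    event = message.value
    # Route suspicious events onward; the rule here is a stand-in for a model.
    if event.get("bytes_sent", 0) > 1_000_000:
        print(f"possible exfiltration from {event.get('src_ip')}")
```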
Posted 8 hours ago
3.0 years
15 - 22 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 1500000-2200000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: LINEN.Cloud)
(*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)
What do you need for this opportunity?
Must have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes
LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development
Skills: Java, Angular, Microservices, React.js, SQL
We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.
Responsibilities:
Designing, implementing, and unit testing Java applications.
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Requirements:
Experience developing and testing Java Web Services (RESTful primarily, plus XML and JSON) and supporting integration and enabling access via API calls.
Experience with Tomcat, Apache, and similar web server technologies.
Hands-on experience working with RabbitMQ and Kafka.
Experience with the Spring Boot framework.
Hands-on experience with Angular/Node.js is preferred.
Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
Experience with virtualization platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
Agile/Scrum expertise.
Experience establishing and enforcing branching and software development processes and deployment via CI/CD.
Competencies:
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Team spirit and strong communication skills.
Customer- and service-oriented, with a confident appearance in an international environment.
Very high proficiency in English.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 8 hours ago
3.0 years
15 - 22 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 1500000-2200000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: LINEN.Cloud)
(*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)
What do you need for this opportunity?
Must have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes
LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development
Skills: Java, Angular, Microservices, React.js, SQL
We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.
Responsibilities:
Designing, implementing, and unit testing Java applications.
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Requirements:
Experience developing and testing Java Web Services (RESTful primarily, plus XML and JSON) and supporting integration and enabling access via API calls.
Experience with Tomcat, Apache, and similar web server technologies.
Hands-on experience working with RabbitMQ and Kafka.
Experience with the Spring Boot framework.
Hands-on experience with Angular/Node.js is preferred.
Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
Experience with virtualization platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
Agile/Scrum expertise.
Experience establishing and enforcing branching and software development processes and deployment via CI/CD.
Competencies:
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.
Team spirit and strong communication skills.
Customer- and service-oriented, with a confident appearance in an international environment.
Very high proficiency in English.
How to apply for this opportunity?
Step 1: Click on Apply! and Register or Login on our portal.
Step 2: Complete the Screening Form & upload your updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 8 hours ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
OSTTRA India
The Role: Technical Architect
The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.
The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.
What's in it for you: The current objective is to identify individuals with 12+ years of experience who have high expertise, to join their existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based out of Gurgaon and is expected to work with different teams and colleagues across the globe.
Responsibilities
The role shall be responsible for establishing, maintaining, socialising, and realising the target state of product architecture for the post-trade businesses of OSTTRA. This shall encompass all services that OSTTRA offers for these businesses and all the systems which enable those services.
We are looking for a person who is high on energy and motivation and who feels challenged by difficult problems.
The role shall partner with portfolio delivery leads, programme managers, portfolio business leads and horizontal technical architects to frame the strategy, to provide solutions for planned programmes and to guide the roadmaps. He/she shall be able to build high-level designs and low-level technical solutions, considering factors such as scalability, performance, security, maintainability and cost-effectiveness.
The role shall own the technical and architectural decisions for the projects and products. He/she shall review the designs and own the design quality. They will ensure that there is a robust code/implementation review practice in the product. Likewise, they shall be responsible for the robust CI/CD and DevSecOps engineering pipelines used in the projects. He/she shall provide ongoing support to the delivery teams on design and architecture problems.
The role shall manage the tech-debt log and plan for its remediation across deliveries and roadmaps. The role shall maintain the living Architecture Reference Documents for the products. They shall actively partner with horizontal technical architects to factor tech constructs within their portfolios and also to ensure vibrant feedback into the technical strategies.
They shall be responsible for guiding the L3/L2 teams when needed in the resolution of production situations and incidents. They shall be responsible for defining guidelines and system designs for DR strategies and BCP plans for the products. They shall be responsible for architecting key mission-critical system components, reviewing designs and helping uplift them. He/she should perform critical technical reviews of application or infrastructure changes to the system.
The role shall enable an ecosystem such that the functional API, message, data and flow models within the products of the portfolio are well documented,
and shall also provide strong governance and oversight of the same.
What We're Looking For
Rich domain experience of the financial services industry, preferably with financial markets within pre/post-trade life cycles or large-scale buy/sell/brokerage organisations.
Should have experience of architecture design for multiple products and of large-scale change programmes.
Should be adept with application development and engineering methods and tools.
Should have robust experience with microservices applications and services development and integration.
Should be adept with development tools, contemporary runtimes, and observability stacks for microservices.
Should have experience of modelling for APIs, messages and possibly data.
Should have experience of complex migrations, including data migration.
Should have experience in architecture and design of highly resilient, high-availability, high-volume applications.
Should be able to initiate or contribute to initiatives around application reliability and resilience.
Rich experience of architectural patterns like MVC-based front-end applications, API- and event-driven architectures, event streaming, message processing/orchestration, CQRS and possibly event sourcing.
Experience of protocols or integration technologies like HTTP, MQ, FTP, REST/API and possibly FIX/SWIFT.
Experience of messaging formats and paradigms like XSD, XML, XSLT, JSON, REST and possibly gRPC, GraphQL.
Experience of technologies like Kafka, Spark streams, Kubernetes/EKS, API gateways, web and application servers, message queuing infrastructure, and data transformation/ETL tools.
Experience of languages like Java and Python; application development frameworks like Spring Boot/family and the Apache family; and commonplace AWS or other cloud-provider services.
Experience of engineering methods like CI/CD, build-deploy automation, infrastructure as code, and unit/integration testing methods and tools.
Should have the appetite to review and write code for complex problems, and should find interest and energy in design discussions and reviews.
Experience of development with NoSQL and relational databases is required.
Should have active or prior experience with MVC web development or with contemporary React/Angular frameworks.
Should have experience of migrating monolithic applications to cloud-based solutions, with an understanding of defining domain-based service responsibilities.
Should have rich experience of designing cloud-native architectures, including microservices, serverless computing and containerization (Docker, Kubernetes) on relevant platforms (GCP/AWS), along with monitoring aspects.
The Location: Gurgaon, India
About Company Statement
OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies.
OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset.
These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities.
About OSTTRA
Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts.
OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end-to-end workflows, from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.
What's In It For You?
Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Our Benefits Include
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group)
Job ID: 315820
Posted On: 2025-07-10
Location: Gurgaon, Haryana, India
Posted 8 hours ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Key Responsibilities
Design and develop scalable ETL pipelines using Cloud Functions, Cloud Dataproc (Spark), and BigQuery as the central data warehouse for large-scale batch and transformation workloads.
Implement efficient data modeling techniques in BigQuery (including star/snowflake schemas, partitioning, and clustering) to support high-performance analytics and reduce query costs.
Build end-to-end ingestion frameworks leveraging Cloud Pub/Sub and Cloud Functions for real-time and event-driven data capture.
Use Apache Airflow (Cloud Composer) for orchestration of complex data workflows and dependency management.
Apply Cloud Data Fusion and Datastream selectively for integrating specific sources (e.g., databases and legacy systems) into the pipeline.
Develop strong backtracking and troubleshooting workflows to quickly identify data issues, job failures, and pipeline bottlenecks, ensuring consistent data delivery and SLA compliance.
Integrate robust monitoring, alerting, and logging to ensure data quality, integrity, and observability.
Tech stack
GCP: BigQuery, Cloud Functions, Cloud Dataproc (Spark), Pub/Sub, Data Fusion, Datastream
Orchestration: Apache Airflow (Cloud Composer)
Languages: Python, SQL, PySpark
Concepts: Data Modeling, ETL/ELT, Streaming & Batch Processing, Schema Management, Monitoring & Logging
Key data sources (candidates need to know the ingestion techniques for these): CRM systems (cloud-based and internal), Salesforce, Teradata, MySQL, APIs, and other third-party and internal operational systems
Skills: ETL/ELT, Cloud Data Fusion, schema management, SQL, PySpark, Cloud Dataproc (Spark), monitoring & logging, data modeling, BigQuery, Cloud Pub/Sub, Python, GCP, streaming & batch processing, Datastream, Cloud Functions, Spark, Apache Airflow (Cloud Composer)
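As a hedged sketch of the Cloud Composer orchestration described above, the DAG below chains an ingestion step and a BigQuery load with Airflow's PythonOperator. The DAG ID, schedule, and task bodies are placeholders for illustration, not the actual pipeline.

```python
# An illustrative Airflow (Cloud Composer) DAG: ingest, then load.
# Task bodies are stubs; real tasks would call Pub/Sub and BigQuery clients.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_from_pubsub(**context):
    print("pulling messages from Pub/Sub ...")        # placeholder step

def load_to_bigquery(**context):
    print("loading staged files into BigQuery ...")   # placeholder step

with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_from_pubsub)
    load = PythonOperator(task_id="load", python_callable=load_to_bigquery)
    ingest >> load   # load runs only after ingestion succeeds
```

Expressing the dependency as `ingest >> load` is what gives Airflow the dependency management mentioned in the responsibilities: downstream tasks are skipped or retried based on upstream state.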
Posted 8 hours ago
3.0 years
0 Lacs
India
Remote
We are hiring CodeIgniter Back-End Developers to join our team. Send your CV with a cover letter to info@clickbydigital.in & clickbydigital@gmail.com or call 7482020111. Attachments of previous work details will be a plus point. A work-from-home facility is available. Details are mentioned below:
Back-End Developer (knowledge of PHP, CodeIgniter, and Node.js is compulsory)
Job brief
We are looking for a Back-End Developer to produce scalable software solutions. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. The ideal candidate is a highly resourceful and innovative developer with extensive experience in the layout, design, and coding of software, specifically in PHP, Node.js, and CodeIgniter. You must also possess strong knowledge of web application development using Node.js, PHP, CodeIgniter, Java, JS, the C# programming language, and MySQL Server databases. You should be familiar with CI/CD deployment and Git.
As a Back-End Developer, you should be familiar with both front-end and back-end coding languages, development frameworks, and third-party libraries. You should also be a team player with a knack for visual design and utility. If you're also familiar with Agile methodologies, we'd like to meet you.
Responsibilities
· Work with development teams and product managers to ideate software solutions
· Design client-side and server-side architecture
· Develop and manage well-functioning databases and applications
· Write effective APIs in CodeIgniter 4
· Test software to ensure responsiveness and efficiency
· Troubleshoot, debug, and upgrade software
· Create security and data protection settings
· Build features and applications with a mobile-responsive design
· Write technical documentation
· Work with data scientists and analysts to improve software
Requirements
· Proven experience as a Back-End Developer (with a minimum of 3 years of work experience) or in a similar role
· Experience developing desktop and mobile applications
· Familiarity with common stacks
· Knowledge of multiple front-end languages and libraries (e.g., HTML/CSS, JavaScript, XML, jQuery)
· Knowledge of multiple back-end languages (e.g., C#, Java), PHP frameworks (CI 4), and JavaScript frameworks (e.g., Angular, React, Node.js)
· Familiarity with databases (e.g., MySQL, MongoDB), web servers (e.g., Apache), and UI/UX design
· Excellent communication and teamwork skills
· Great attention to detail
· Organizational skills
· An analytical mind
· Degree in Computer Science, Statistics or a relevant field
Salary Range: Rs 4.0 Lacs to 5.5 Lacs in hand per annum
Cheers,
ClickByDigital team
#hiring #workfromhome #wfh #CIdeveloper #CodeIgniter #backenddeveloper
Posted 9 hours ago
The Apache Software Foundation maintains a wide range of widely used open-source software projects. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers pursuing Apache-related roles have plentiful opportunities across industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.
India's major IT hubs are known for their thriving technology sectors and see high demand for Apache professionals across different organizations.
The salary range for Apache professionals in India varies based on experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!
Accenture: 39581 Jobs | Dublin
Wipro: 19070 Jobs | Bengaluru
Accenture in India: 14409 Jobs | Dublin 2
EY: 14248 Jobs | London
Uplers: 10536 Jobs | Ahmedabad
Amazon: 10262 Jobs | Seattle, WA
IBM: 9120 Jobs | Armonk
Oracle: 8925 Jobs | Redwood City
Capgemini: 7500 Jobs | Paris, France
Virtusa: 7132 Jobs | Southborough