1775 Versioning Jobs - Page 8

JobPe aggregates listings so that openings are easy to find in one place, but you apply directly on the original job portal.

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad
Contract Duration: 6 Months
Experience Required: 8+ years (overall), 5+ years (relevant)

🔧 Primary Skills: Python, Spark (PySpark), SQL, Delta Lake

📌 Key Responsibilities & Skills
  • Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming
  • Proficient in Delta Lake features: time travel, schema evolution, data partitioning
  • Experience designing and building data pipelines using Spark and Delta Lake
  • Solid experience in Python/Scala/Java for Spark development
  • Knowledge of data ingestion from files, APIs, and databases
  • Familiarity with data validation and quality best practices
  • Working knowledge of data warehouse concepts and data modeling
  • Hands-on with Git for code versioning
  • Exposure to CI/CD pipelines and containerization tools
  • Nice to have: experience in ETL tools like DataStage, Prophecy, Informatica, or Ab Initio

Posted 3 days ago

Apply
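
The posting above centres on the three Delta Lake features it names: time travel, schema evolution, and data partitioning. Purely as a hedged illustration (not taken from the posting; the table path and column names are invented, and it assumes a local Spark session with the delta-spark package installed), a minimal PySpark sketch of those features might look like this:

```python
from pyspark.sql import SparkSession

# Assumes the delta-spark package is available (e.g. installed via pip).
spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/events"  # hypothetical table location

# Data partitioning: write a Delta table partitioned by date.
df = spark.createDataFrame([(1, "2024-01-01", 10.0)], ["id", "dt", "amount"])
df.write.format("delta").mode("overwrite").partitionBy("dt").save(path)

# Schema evolution: append rows that carry a new column.
df2 = spark.createDataFrame([(2, "2024-01-02", 5.0, "web")],
                            ["id", "dt", "amount", "channel"])
df2.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as it existed at an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```

Interviews for roles like this often probe exactly this trio, since `mergeSchema` and `versionAsOf` are where Delta tables differ most visibly from plain Parquet.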

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About the Role
Grade Level (for internal use): 10

Position Summary
Our proprietary software-as-a-service helps automotive dealerships and sales teams better understand and predict exactly which customers are ready to buy, the reasons why, and the key offers and incentives most likely to close the sale. Its micro-marketing engine then delivers the right message at the right time to those customers, ensuring higher conversion rates and a stronger ROI.

What You'll Do
You will be part of our Data Platform & Product Insights data engineering team. As part of this agile team, you will work in our cloud-native environment to:
  • Build and support data ingestion and processing pipelines in the cloud. This entails extraction, load, and transformation of 'big data' from a wide variety of sources, both batch and streaming, using the latest data frameworks and technologies.
  • Partner with the product team to assemble large, complex data sets that meet functional and non-functional business requirements; ensure build-out of data dictionaries/data catalogues and detailed documentation and knowledge around these data assets, metrics, and KPIs.
  • Warehouse this data; build data marts, data aggregations, metrics, KPIs, and business logic that lead to actionable insights into our product efficacy, marketing platform, customer behaviour, retention, etc.
  • Build real-time monitoring dashboards and alerting systems.
  • Coach and mentor other team members.

Who You Are
  • 6+ years of experience in big data and data engineering.
  • Strong knowledge of advanced SQL, data warehousing concepts, and data mart design.
  • Strong programming skills in SQL, Python/PySpark, etc.
  • Experience in design and development of data pipelines and ETL/ELT processes, on-premises and in the cloud.
  • Experience with one of the cloud providers: GCP, Azure, AWS.
  • Experience with relational SQL and NoSQL databases, including Postgres and MongoDB.
  • Experience with workflow management tools: Airflow, AWS Data Pipeline, Google Cloud Composer, etc.
  • Experience with distributed version control environments such as Git and Azure DevOps.
  • Experience building Docker images, fetching/promoting/deploying them to production, and integrating container orchestration using Kubernetes by creating pods, ConfigMaps, and deployments with Terraform.
  • Able to convert business queries into technical documentation.
  • Strong problem-solving and communication skills.
  • Bachelor's or an advanced degree in computer science or a related engineering discipline.
  • Good to have: exposure to Business Intelligence (BI) tools like Tableau, Dundas, or Power BI; Agile software development methodologies; experience working in multi-functional, multi-location teams.

Grade: 10
Location: Gurugram
Hybrid Model: twice-a-week work from office
Shift Time: 12 pm to 9 pm IST

What You'll Love About Us – Do Ask Us About These!
  • Total Rewards: monetary, beneficial, and developmental rewards!
  • Work-Life Balance: you can't do a good job if your job is all you do!
  • Prepare for the Future: Academy – we are all learners; we are all teachers!
  • Employee Assistance Program: confidential and professional counselling and consulting.
  • Diversity & Inclusion: HeForShe!
  • Internal Mobility: grow with us!

About automotiveMastermind
Who we are: Founded in 2012, automotiveMastermind is a leading provider of predictive analytics and marketing automation solutions for the automotive industry and believes that technology can transform data, revealing key customer insights to accurately predict automotive sales. Through its proprietary automated sales and marketing platform, Mastermind, the company empowers dealers to close more deals by predicting future buyers and consistently marketing to them. automotiveMastermind is headquartered in New York City. For more information, visit automotivemastermind.com.

At automotiveMastermind, we thrive on high energy at high speed. We're an organization in hyper-growth mode and have a fast-paced culture to match. Our highly engaged teams feel passionately about both our product and our people. This passion is what continues to motivate and challenge our teams to be best-in-class. Our cultural values of "Drive" and "Help" have been at the core of what we do and how we have built our culture through the years. This cultural framework inspires a passion for success while collaborating to win.

What We Do
Through our proprietary automated sales and marketing platform, Mastermind, we empower dealers to close more deals by predicting future buyers and consistently marketing to them. In short, we help automotive dealerships generate success in their loyalty, service, and conquest portfolios through a combination of turnkey predictive analytics, proactive marketing, and dedicated consultative services.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
  • Health & Wellness: health care coverage designed for the mind and body.
  • Flexible Downtime: generous time off helps keep you energized for your time on.
  • Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
  • Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
  • Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
  • Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Posted 3 days ago

Apply
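
The role above lists workflow management tools such as Airflow, Cloud Composer, and AWS Data Pipeline alongside batch and streaming ingestion. As an illustrative sketch only (the DAG name and task callables are invented, and the posting does not prescribe Airflow over the alternatives it names), a minimal extract-transform-load DAG in Airflow 2.x could look like:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables; a real pipeline would pull from batch/streaming
# sources, apply business logic, and load the data marts the posting describes.
def extract(): print("extract from batch and streaming sources")
def transform(): print("apply business logic, build aggregations and KPIs")
def load(): print("load data marts and warehouse tables")

with DAG(
    dag_id="product_insights_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```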

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Key Responsibilities
  • Edit high-quality short-form videos (Reels, YouTube Shorts, Ads) with transitions, music sync, subtitles, and effects
  • Work with marketing to develop storyboards and content plans for regular shoots
  • Add motion graphics, titles, and CTA animations in line with brand guidelines
  • Repurpose long videos into teasers, behind-the-scenes clips, and explainer cuts
  • Manage storage, footage, and versioning in an organized manner
  • Understand hooks, pace, and trending formats of 2025 social media content
  • Collaborate with graphic designers and content writers to create visual-first storytelling

About Company: Established in 2009 as a real estate consultancy firm, The Raksha Group quickly earned a reputation for reliability and a deep understanding of the real estate market. This strong foundation enabled the company to gain the trust of numerous clients and stakeholders. In 2015, the company made a strategic transition from consultancy to construction, marking a significant milestone. This shift allowed Raksha Group to leverage its industry expertise and market insights to develop its own high-quality, innovative residential and commercial projects. Over the past decade, The Raksha Group has distinguished itself in the real estate industry through its unwavering commitment to excellence, sustainable development, and customer satisfaction.

Posted 3 days ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Key Responsibilities
  • Edit high-quality short-form videos (Reels, YouTube Shorts, Ads) with transitions, music sync, subtitles, and effects
  • Work with marketing to develop storyboards and content plans for regular shoots
  • Add motion graphics, titles, and CTA animations in line with brand guidelines
  • Repurpose long videos into teasers, behind-the-scenes clips, and explainer cuts
  • Manage storage, footage, and versioning in an organized manner
  • Understand hooks, pace, and trending formats of 2025 social media content
  • Collaborate with graphic designers and content writers to create visual-first storytelling

About Company: Established in 2009 as a real estate consultancy firm, The Raksha Group quickly earned a reputation for reliability and a deep understanding of the real estate market. This strong foundation enabled the company to gain the trust of numerous clients and stakeholders. In 2015, the company made a strategic transition from consultancy to construction, marking a significant milestone. This shift allowed Raksha Group to leverage its industry expertise and market insights to develop its own high-quality, innovative residential and commercial projects. Over the past decade, The Raksha Group has distinguished itself in the real estate industry through its unwavering commitment to excellence, sustainable development, and customer satisfaction.

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Key Responsibilities
  • Edit high-quality short-form videos (Reels, YouTube Shorts, Ads) with transitions, music sync, subtitles, and effects
  • Work with marketing to develop storyboards and content plans for regular shoots
  • Add motion graphics, titles, and CTA animations in line with brand guidelines
  • Repurpose long videos into teasers, behind-the-scenes clips, and explainer cuts
  • Manage storage, footage, and versioning in an organized manner
  • Understand hooks, pace, and trending formats of 2025 social media content
  • Collaborate with graphic designers and content writers to create visual-first storytelling

About Company: Established in 2009 as a real estate consultancy firm, The Raksha Group quickly earned a reputation for reliability and a deep understanding of the real estate market. This strong foundation enabled the company to gain the trust of numerous clients and stakeholders. In 2015, the company made a strategic transition from consultancy to construction, marking a significant milestone. This shift allowed Raksha Group to leverage its industry expertise and market insights to develop its own high-quality, innovative residential and commercial projects. Over the past decade, The Raksha Group has distinguished itself in the real estate industry through its unwavering commitment to excellence, sustainable development, and customer satisfaction.

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru North, Karnataka, India

On-site

Job Description
GalaxEye Space is a deep-tech space start-up spun off from IIT Madras and currently based in Bengaluru, Karnataka. We are dedicated to advancing the frontiers of space exploration. Our mission is to develop cutting-edge solutions that address the challenges of the modern space industry by specialising in a constellation of miniaturised, multi-sensor SAR+EO satellites. Our new-age technology enables all-time, all-weather imaging; by leveraging advanced processing and AI capabilities, we ensure near real-time data delivery. We are glad to highlight that we have successfully demonstrated these imaging capabilities, the first of their kind in the world, across platforms such as drones and HAPS (High-Altitude Pseudo Satellites).

Responsibilities
  • Collaborate with the Bangalore team to accelerate the development of the product
  • Install, configure, and execute packaged Python + React applications on an air-gapped system, isolated from the network
  • Run automated and manual QC/benchmark suites on geospatial datasets, compare against target metrics, and document results
  • Diagnose failures and performance gaps; tweak YAML/JSON configuration parameters, model checkpoints, and resource settings to hit accuracy and latency targets
  • Generate concise diffs/patch files and reproducible reports for the backend team; participate in rapid iteration cycles
  • Automate local environment set-up via scripts (e.g., PowerShell, Bash, Ansible) and maintain a minimal local PyPI/npm cache
  • Pair with the Geospatial Analyst to validate outputs and capture edge-case feedback from real client data

Requirements
  • 2-3 years building and debugging full-stack apps (Python 3, FastAPI/Flask, React, Electron, Node)
  • Solid grasp of packaging and deployment: PyInstaller/Briefcase, npm scripts, semantic versioning, checksum-based release manifests
  • Comfort with offline dependency management (wheelhouses, npm offline cache) and Git-based change control
  • Strong test mindset: pytest, Playwright, snapshot testing, CI pipelines (GitHub Actions, GitLab CI, or similar)

Additional Skillset
  • Any prior experience or exposure to image processing or computer vision
  • Experience with geospatial data formats (SAFE, CEOS, GeoTIFF) and GDAL/Rasterio

Benefits
  • Acquire valuable opportunities for learning and development through close collaboration with the founding team.
  • Contribute to impactful projects and initiatives that drive meaningful change.
  • We provide a competitive salary package that aligns with your expertise and experience.
  • Enjoy comprehensive health benefits, including medical, dental, and vision coverage, ensuring the well-being of you and your family.
  • Work in a dynamic and innovative environment alongside a dedicated and passionate team.

Posted 3 days ago

Apply
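
The GalaxEye role above revolves around running QC/benchmark suites on geospatial datasets and tuning YAML configuration until target metrics are met. A hedged sketch of what such a check might look like with pytest (the metric names, thresholds, file name, and `run_pipeline` stub are all hypothetical; the real suite would invoke the packaged application, and PyYAML is assumed to be installed):

```python
# test_qc_benchmarks.py -- illustrative only; assumes a targets.yaml such as:
#   scene_01: {min_iou: 0.85, max_latency_s: 2.0}
#   scene_02: {min_iou: 0.80, max_latency_s: 3.5}
import pytest
import yaml

with open("targets.yaml") as f:
    TARGETS = yaml.safe_load(f)

def run_pipeline(dataset: str) -> dict:
    # Stand-in for launching the packaged app on the air-gapped machine
    # and parsing the benchmark report it produces.
    return {"iou": 0.87, "latency_s": 1.4}

@pytest.mark.parametrize("dataset", sorted(TARGETS))
def test_meets_targets(dataset: str) -> None:
    metrics = run_pipeline(dataset)
    target = TARGETS[dataset]
    assert metrics["iou"] >= target["min_iou"], f"{dataset}: accuracy below target"
    assert metrics["latency_s"] <= target["max_latency_s"], f"{dataset}: too slow"
```

Keeping the thresholds in YAML rather than in the test body matches the posting's workflow: configuration gets tweaked and re-run without touching code.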

13.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Experienced Enterprise Content Management (ECM) Systems: Content & Document Lifecycle Management
  • 13+ years of experience in Content Management, Document Management, Records Management, or Enterprise Content Management (ECM).
  • Manage and maintain content and documents across various systems (e.g., SharePoint, Documentum, OpenText, internal repositories, Adobe Experience Manager (AEM)), ensuring accuracy, consistency, and compliance.
  • Develop and enforce content classification schemas, metadata standards, and tagging conventions.
  • Oversee document version control, access permissions, retention policies, and archival processes.
  • Ensure all content and document management practices comply with internal policies, industry regulations, and legal requirements (e.g., data privacy, record-keeping).
  • Contribute to the development and refinement of content governance frameworks.
  • Conduct regular content audits to identify outdated, redundant, or inconsistent information.
  • Engineer solutions to capture not only document content but also organizational and semantic context, ensuring each document is tagged, enriched, and classified for optimal downstream use.
  • Implement context-preserving transformations, such as OCR, language detection, classification, and context-based metadata extraction, leveraging Azure Cognitive Services and custom AI models.
  • Define strategies for automated metadata extraction, entity recognition, taxonomy management, and document context embedding (including vector-based semantic search).
  • Implement auto-tagging, versioning, and lineage tracking to ensure every document's journey, from ingestion to consumption, remains transparent and auditable.
  • Champion the integration of advanced content embedding (e.g., knowledge graphs, vector databases) to enable intelligent, context-aware document retrieval and RAG (Retrieval Augmented Generation) solutions.
  • Educate and train users on best practices for content creation, organization, and AI-enabled tools.
  • Knowledge of headless CMS. Examples: Contentful, Strapi, Sanity, ButterCMS, Storyblok, Hygraph, Directus; many traditional CMSs like WordPress and Drupal now also offer "headless" options via APIs.

AI Skills
  • Demonstrated understanding and working knowledge of Artificial Intelligence (AI) and Machine Learning (ML) concepts, particularly as they apply to unstructured data (e.g., Natural Language Processing (NLP), Intelligent Document Processing (IDP), text analytics, generative AI basics). This is not an AI development role, but a comprehension of capabilities and limitations is key.
  • A genuine interest in how AI can transform information management.

Team Leadership Skills
  • Responsible for designing functional technology solutions, overseeing and reviewing development and implementation of solutions, and providing support to software development teams under supervision of the Technical Lead and in close collaboration with Lead Engineers.

Communication & Collaborative Skills
  • Lead workshops and knowledge-sharing sessions to promote best practices in document enrichment and contextualization.
  • Strong analytical and problem-solving abilities, with a keen eye for detail.
  • Excellent communication and interpersonal skills, capable of explaining complex information clearly.
  • Ability to work independently and collaboratively in a team-oriented environment.
  • Proactive, organized, and capable of managing multiple priorities.

Posted 3 days ago

Apply
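
The listing above asks for auto-tagging, versioning, and lineage-aware metadata over document repositories, and points to Azure Cognitive Services and custom AI models for the real implementation. Purely as a toy illustration of the shape of an auto-tagging step (every class, rule, and tag name here is invented; real extractors would be NLP/IDP services rather than regexes):

```python
import re
from dataclasses import dataclass, field

@dataclass
class DocumentRecord:  # hypothetical repository record
    doc_id: str
    text: str
    version: int = 1
    tags: list[str] = field(default_factory=list)

# Trivial rule-based extractors standing in for NLP/IDP services.
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")
INVOICE_RE = re.compile(r"\binvoice\b", re.IGNORECASE)

def auto_tag(doc: DocumentRecord) -> DocumentRecord:
    """Enrich a document with classification tags and bump its version."""
    if DATE_RE.search(doc.text):
        doc.tags.append("dated")
    if INVOICE_RE.search(doc.text):
        doc.tags.append("finance/invoice")
    doc.version += 1  # record that enrichment changed the metadata
    return doc

print(auto_tag(DocumentRecord("D-1", "Invoice issued 2024-03-07")).tags)
```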

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Full Stack AI Architect
Experience: 10+ years
Location: Hyderabad/Chennai

Summary: We are seeking a highly skilled Full Stack AI Architect to join our team and work under the guidance of the Principal Architect to develop and deploy LLM agents and multi-agent frameworks. The ideal candidate will have 10+ years of experience in software engineering, with a strong focus on AI/ML. This role requires excellent problem-solving capabilities and a strong execution mindset to drive the development and deployment of AI-powered solutions.

Responsibilities
  • Collaborate with the Principal Architect to design and implement AI agents and multi-agent frameworks.
  • Develop and maintain robust, scalable, and maintainable microservices architectures.
  • Ensure seamless integration of AI agents with core systems and databases.
  • Develop APIs and SDKs for internal and external consumption.
  • Work closely with data scientists to fine-tune and optimize LLMs for specific tasks and domains.
  • Implement MLOps practices, including CI/CD pipelines, model versioning, and experiment tracking.
  • Design and implement comprehensive monitoring and observability solutions to track model performance, identify anomalies, and ensure system stability.
  • Utilize containerization technologies such as Docker and Kubernetes for efficient deployment and scaling of applications.
  • Leverage cloud platforms such as AWS, Azure, or GCP for infrastructure and services.
  • Design and implement data pipelines for efficient data ingestion, transformation, and storage.
  • Ensure data quality and security throughout the data lifecycle.
  • Mentor junior engineers and foster a culture of innovation, collaboration, and continuous learning.

Qualifications
  • 10+ years of experience in software engineering with a strong focus on AI/ML.
  • Proficiency in frontend frameworks like React, Angular, or Vue.js.
  • Strong hands-on experience with backend technologies like Node.js, Python (with frameworks like Flask, Django, or FastAPI), or Java.
  • Experience with cloud platforms such as AWS, Azure, or GCP.
  • Proven ability to design and implement complex, scalable, and maintainable architectures.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Passion for continuous learning and staying up to date with the latest advancements in AI/ML.
  • End-to-end experience with at least one full AI stack on Azure, AWS, or GCP, including components such as Azure Machine Learning, AWS SageMaker, or Google AI Platform.
  • Hands-on experience with agent frameworks like AutoGen, AWS Agent Framework, LangGraph, etc.
  • Experience with databases such as MongoDB, PostgreSQL, or similar technologies for efficient data management and integration.

Illustrative Projects You May Have Worked On
  • Successfully led the development and deployment of an AI-powered recommendation system using AWS SageMaker, integrating it with a Node.js backend and a React frontend.
  • Designed and implemented a real-time fraud detection system on Azure, utilizing Azure Machine Learning for model training and Kubernetes for container orchestration.
  • Developed a chatbot using Google AI Platform, integrating it with a Django backend and deploying it on GCP, ensuring seamless interaction with MongoDB for data storage.

Posted 3 days ago

Apply
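
This posting calls out MLOps practices: CI/CD, model versioning, and experiment tracking. It does not name a specific tracking tool, so taking MLflow purely as one common choice, a minimal experiment-tracking sketch (the experiment name, parameters, and metrics are invented placeholders) might look like:

```python
import mlflow

mlflow.set_experiment("agent-eval")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Parameters and metrics stand in for whatever the fine-tuning
    # and evaluation loop actually produces.
    mlflow.log_param("base_model", "example-llm-7b")
    mlflow.log_param("temperature", 0.2)
    mlflow.log_metric("task_success_rate", 0.91)
    mlflow.log_metric("p95_latency_s", 1.8)
```

Logged runs can then be compared in the MLflow UI, which is what makes model versioning and rollback decisions auditable rather than ad hoc.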

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. Define specifications for significant new projects and specify, design, and develop software according to those specifications. You will perform professional software development tasks associated with developing, designing, and debugging software applications or operating systems. Provide leadership and expertise in the development of new products/services/processes, frequently operating at the leading edge of technology. Recommend and justify major changes to existing products/services/processes.

BS or MS degree or equivalent experience relevant to the functional area. 10 or more years of software engineering or related experience. Career Level: IC5

Responsibilities
Position Overview: We are seeking an experienced Senior Database/Application Developer to join our dynamic team. This role is focused on understanding customer requirements in the context of large-scale application development, API design, and data integration. The ideal candidate will have deep expertise in both databases and application development, with a strong understanding of the specific needs and expectations that customer developers have when it comes to APIs, database features, data ingestion, and integration with data lakes. You will be a key player in defining and creating the architectural components of our platform, ensuring that our APIs are well aligned with the needs of developers, and guiding the development of robust, scalable solutions.

Key Responsibilities:
  • API Design & Development: Work closely with customers and internal teams to define, develop, and enhance APIs for accessing and managing databases. Ensure APIs are intuitive, efficient, and scalable.
  • Database Architecture & Feature Exposure: Collaborate with database architects to identify key database features that should be exposed via APIs. Ensure that databases are optimized for performance and provide the necessary functionality for developers to build scalable applications.
  • Data Ingestion: Design and implement strategies for efficient and high-performance data ingestion processes. Focus on ensuring smooth, scalable, and reliable data flows from a variety of sources into databases and data lakes.
  • Integration with Data Lakes: Understand customer requirements for data lakes and assist in designing seamless integration between applications and large-scale data lakes, ensuring efficient querying and data storage solutions.
  • Customer-Centric Solutions: Engage with customer developers to understand their needs and pain points. Translate these insights into actionable solutions that improve product usability and offer high-value features.
  • Performance Optimization: Work with the team to ensure high-performance, low-latency database interactions and API calls. Troubleshoot and optimize application performance, especially under heavy-load scenarios.
  • Collaboration & Mentorship: Mentor junior developers, sharing knowledge around best practices in database architecture, API development, and data integrations. Collaborate cross-functionally with product management, engineering, and customer support teams to deliver the best developer experience.

Required Skills & Qualifications:
  • Experience: 10+ years of experience in database development, application development, and building large-scale systems.
  • Database Expertise: Proficiency in relational and NoSQL databases.
  • API Design & Development: Proven experience in designing RESTful APIs; strong understanding of API security, versioning, and documentation practices.
  • Data Ingestion & ETL: Experience working with data ingestion pipelines, ETL processes, and integrating data from multiple sources into databases and data lakes.
  • Data Lakes & Big Data Technologies: Hands-on experience with data lakes and big data processing frameworks.
  • Programming Skills: Proficiency in at least one modern programming language, such as Python, Java, Go, or Node.js.
  • Customer-Focused: Strong understanding of customer needs, with the ability to translate business requirements into technical solutions that add value to developers.
  • Communication: Excellent communication skills, with the ability to convey complex technical concepts clearly to both technical and non-technical stakeholders.
  • Problem-Solving: Strong analytical and troubleshooting skills with a proactive approach to identifying and solving problems.
  • Team Collaboration: Experience working in an agile development environment and collaborating with cross-functional teams.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 3 days ago

Apply
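
The Oracle role above stresses API design with explicit attention to versioning and non-breaking evolution. As an illustrative sketch only (the resource, fields, and framework choice are mine, not the posting's; FastAPI is used simply because it makes URL-based versioning compact), two versions of the same endpoint can coexist behind prefixed routers:

```python
from fastapi import APIRouter, FastAPI

app = FastAPI(title="Orders API (illustrative)")

v1 = APIRouter(prefix="/v1")
v2 = APIRouter(prefix="/v2")

@v1.get("/orders/{order_id}")
def get_order_v1(order_id: int) -> dict:
    return {"id": order_id, "status": "shipped"}

@v2.get("/orders/{order_id}")
def get_order_v2(order_id: int) -> dict:
    # v2 adds a field; v1 clients keep working unchanged.
    return {"id": order_id, "status": "shipped", "eta_days": 2}

app.include_router(v1)
app.include_router(v2)
```

Header- or media-type-based versioning are common alternatives; the point the posting is after is the same either way: existing clients never break when the API evolves.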

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description Summary
Designs, develops, tests, debugs, and implements more complex operating systems components, software tools, and utilities with full competency. Coordinates with users to determine requirements. Reviews systems under development and related documentation. Makes more complex modifications to existing software to fit specialized needs and configurations, and maintains program libraries and technical documentation. May coordinate activities of the project team and assist in monitoring project schedules and costs.

Essential Duties and Responsibilities
  • Lead and manage configuration, maintenance, and support of a portfolio of AI models and related products.
  • Manage model delivery to the production deployment team and coordinate model production deployments.
  • Analyze complex data requirements, understand exploratory data analysis, and design solutions that meet business needs.
  • Analyze data profiles, transformation, quality, and security with the dev team to build and enhance data pipelines while maintaining proper quality and control around the data sets.
  • Work closely with cross-functional teams, including business analysts, data engineers, and domain experts.
  • Understand business requirements and translate them into technical solutions.
  • Understand and review the business use cases for data pipelines for the data lake, including ingestion, transformation, and storage in the lakehouse.
  • Present architecture and solutions to executive level.

Minimum Qualifications
  • Bachelor's or master's degree in computer science, engineering, or a related technical field.
  • Minimum of 5 years' experience in building data pipelines for both structured and unstructured data.
  • At least 2 years' experience in Azure data pipeline development.
  • Preferably 3 or more years' experience with Hadoop, Azure Databricks, Stream Analytics, Event Hubs, Kafka, and Flink.
  • Strong proficiency in Python and SQL.
  • Experience with big data technologies (Spark, Hadoop, Kafka).
  • Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn).
  • Knowledge of model-serving technologies (TensorFlow Serving, MLflow, KubeFlow) will be a plus.
  • Experience with one of the cloud platforms (Azure preferred) and their data services; understanding of ML services will get preference.
  • Understanding of containerization and orchestration (Docker, Kubernetes).
  • Experience with data versioning and ML experiment tracking will be a great addition.
  • Knowledge of distributed computing principles.
  • Familiarity with DevOps practices and CI/CD pipelines.

Preferred Qualifications
  • Bachelor's degree in Computer Science or equivalent work experience.
  • Experience with Agile/Scrum methodology.
  • Experience with the tax and accounting domain a plus.
  • Azure Data Scientist certification a plus.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.

Posted 3 days ago

Apply
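
This description names Kafka, Event Hubs, and Spark among its streaming stack. Purely as a sketch of the kind of ingestion pipeline it describes (the broker address, topic, and paths are hypothetical, and it assumes the spark-sql-kafka and Delta packages are on the Spark classpath), Spark Structured Streaming from Kafka into a landing table might look like:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers raw bytes; cast the payload before persisting it.
query = (
    raw.select(col("key").cast("string"),
               col("value").cast("string").alias("payload"))
    .writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # needed for recovery
    .start("/tmp/bronze/events")
)
query.awaitTermination()
```

The checkpoint location is what gives the stream exactly-once recovery semantics after a restart, which is usually the first thing probed when these tools appear on a CV.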

Exploring Versioning Jobs in India

The versioning job market in India is currently thriving with numerous opportunities for skilled professionals. Versioning plays a crucial role in software development, ensuring that code changes are tracked, managed, and deployed efficiently. Job seekers in India looking to pursue a career in versioning can find a variety of roles across different industries.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for versioning professionals in India varies based on experience levels. Entry-level positions can expect to earn between INR 3-5 lakhs per annum, while experienced professionals can command salaries ranging from INR 8-15 lakhs per annum.

Career Path

In the field of versioning, a typical career path may include roles such as:

  • Junior Developer
  • Developer
  • Senior Developer
  • Tech Lead
  • Architect

Related Skills

Apart from expertise in versioning tools like Git, professionals in this field may also be expected to have knowledge and experience in:

  • Continuous Integration/Continuous Deployment (CI/CD)
  • DevOps practices
  • Programming languages like Python, Java, or JavaScript
  • Cloud computing platforms like AWS or Azure

Interview Questions

  • What is version control and why is it important? (basic)
  • Can you explain the difference between Git and SVN? (medium)
  • How would you handle a merge conflict in Git? (medium)
  • What are some best practices for branching in Git? (medium)
  • How does Git differ from other version control systems? (advanced)
  • Explain the concept of rebasing in Git. (medium)
  • What is Git bisect and how is it used? (advanced)
  • How would you revert a commit that has already been pushed to a remote repository? (medium)
  • Describe the difference between a tag and a branch in Git. (basic)
  • What is a Git hook and how can it be useful? (medium)
  • Explain the concept of Git submodules. (advanced)
  • How can you squash multiple commits into a single commit in Git? (medium)
  • What is the purpose of the .gitignore file in a Git repository? (basic)
  • How can you view the commit history in Git? (basic)
  • What is a git stash and how is it used? (medium)
  • Describe the difference between a rebase and a merge in Git. (medium)
  • How do you resolve a conflict during a merge in Git? (medium)
  • Explain the concept of cherry-picking in Git. (advanced)
  • What is the purpose of the HEAD pointer in Git? (basic)
  • How do you undo the last commit in Git? (basic)
  • Describe the Git workflow you follow in your projects. (medium)
  • What is GitLab and how is it different from GitHub? (basic)
  • How do you create a new branch in Git? (basic)
  • What is the difference between Git pull and Git fetch? (medium)
  • Explain how Git rebase works. (advanced)
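
Several of the questions above (viewing history, undoing commits, reverting pushed commits, stashing) come down to a handful of everyday commands. A small Python sketch wrapping those commands, written to run end to end only inside a throwaway repository with at least two commits and a clean working tree, since it creates and rewrites commits:

```python
import subprocess

def git(*args: str) -> str:
    """Run a git command and return its stdout (raises on failure)."""
    result = subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

print(git("log", "--oneline", "-5"))   # view recent commit history

git("revert", "--no-edit", "HEAD")     # safely undo an already-pushed commit
                                       # by adding an inverse commit

git("reset", "--soft", "HEAD~1")       # undo the last (local) commit, in this
                                       # case the revert, keeping its changes staged

git("stash", "push", "-m", "wip")      # shelve the staged work in progress...
git("stash", "pop")                    # ...and restore it
```

The contrast the questions are after: `git revert` is safe on shared history because it adds a new commit, while `git reset` rewrites local history and should never be used on commits others may have pulled.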

Closing Remark

As you navigate the versioning job market in India, remember to continuously upskill, practice your technical knowledge, and showcase your expertise confidently during interviews. With determination and preparation, you can excel in your versioning career and secure exciting opportunities in the industry. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
