0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Syniverse is the world’s most connected company. Whether we’re developing the technology that enables intelligent cars to safely react to traffic changes or freeing travelers to explore by keeping their devices online wherever they go, we believe in leading the world forward. Which is why we work with some of the world’s most recognized brands: eight of the top 10 banks, four of the top 5 global technology companies, and over 900 communications providers. It’s also how we’re able to provide our incredible talent with an innovative culture and great benefits.

Who We're Looking For
The Data Engineer I is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems or building new solutions from the ground up. This role will work with developers, architects, product managers, and data analysts on data initiatives and ensure optimal data delivery with good performance and uptime metrics. Your behaviors align strongly with our values.

Some Of What You'll Do
Scope of the Role:
- Direct Reports: This is an individual contributor role with no direct reports.

Key Responsibilities
- Create, enhance, and maintain optimal data pipeline architecture and implementations.
- Analyze data sets to meet functional/non-functional business requirements.
- Identify, design, and implement data processes: automating processes, optimizing data delivery, etc.
- Build infrastructure and tools to increase data ETL velocity.
- Work with data and analytics experts to implement and enhance analytic product features.
- Provide life cycle support to the Operations team for existing products, services, and functionality assigned to the Data Engineering team.

Experience, Education, And Certifications
- Bachelor’s degree in Computer Science, Statistics, Informatics, or a related field, or equivalent work experience.
- Software development experience desired.
- Experience in data engineering is desired.
- Experience building and optimizing big data pipelines, architectures, and data sets:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL databases, such as PostgreSQL, MySQL, etc.
- Experience with stream-processing systems: Flink, KSQL, Spark Streaming, etc.
- Experience with programming languages, such as Java, Scala, Python, etc.
- Experience with cloud data engineering and development, such as AWS.

Additional Requirements
- Familiarity with Agile software design processes and methodologies.
- Good analytic skills related to working with structured and unstructured datasets.
- Knowledge of message queuing, stream processing, and scalable big data stores.
- Ownership/accountability for tasks/projects with on-time, quality deliveries.
- Good verbal and written communication skills.
- Teamwork with independent design and development habits.
- Work with a sense of urgency and a positive attitude.

Why You Should Join Us
Join us as we write a new chapter, guided by world-class leadership. Come be a part of an exciting and growing organization where we offer competitive total compensation, flexible/remote work, and a leadership team committed to fostering an inclusive, collaborative, and transparent organizational culture.

At Syniverse, connectedness is at the core of our business. We believe diversity, equity, and inclusion among our employees is crucial to our success as a global company as we seek to recruit, develop, and retain the most talented people who want to help us connect the world.

Know someone at Syniverse? Be sure to have them submit you as a referral prior to applying for this position.
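For illustration only (not Syniverse code): a minimal PySpark Structured Streaming job of the kind the Spark/Kafka items above describe. The broker, topic, and event schema are hypothetical placeholders, and the Kafka connector package must be on the Spark classpath.

```python
# Illustrative sketch: count streaming events from a Kafka topic with PySpark.
# Requires the spark-sql-kafka package; broker, topic, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("event-stream-demo").getOrCreate()

# Hypothetical event schema for illustration.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", LongType()),
])

# Read the Kafka topic as a stream and parse the JSON payload.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "device-events")              # placeholder topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Simple aggregation: running event counts per type, printed to the console.
query = (events.groupBy("event_type").count()
         .writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```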
Posted 1 day ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Who We Are
Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 13,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong.

Job Posting

Duties And Responsibilities
Must have experience with the Delinea suite of products.
- Lead and support the Privileged Access Management (PAM) program.
- Support and expand the Privileged Access Management platform.
- Own Privileged Access Management reporting and metrics.
- Work closely with the Identity Access Management team, Security Operations, Infrastructure, application owners, and product managers to help drive the identity strategy.
- Work with vendors and third parties to evaluate new products, features, and solutions.
- Ensure regulatory requirements and industry best practices are followed for Identity and Access Management.
- Support large, cross-functional, globally distributed, and complex projects.
- Provide evidence for compliance activities.
- Provide on-call support of the privileged access management tools.
- Stay current on security technology and trends for identity and access management in manufacturing, ecommerce, and retail.
- Work with operational product owners and various peer teams to integrate secure password vaulting systems and related technologies with end clients' platforms to protect and manage the credentials of critical systems (privileged accounts used by applications).
- Write scripts or code that help with customization of the PAM products; the scripting and coding will involve (but is not limited to) shell scripting, Java, and .NET. An illustrative sketch of this kind of automation follows this posting.
- Perform product evaluation, testing, and certification of such secure vaulting systems; this involves system architectural design and subject matter expertise.
- Ensure that all PAM security products meet or exceed internal and regulatory requirements.
- Produce documentation of processes and procedures for the usage of the product.
- Follow the Technology Development Life Cycle in the development of all security tools related to vaulting services.
- Ensure that all integration of functions and tools meets the end client's standards.
- Define necessary system enhancements to deploy new products and process enhancements.
- Develop, align, and maintain the vision, strategy, and roadmap for privileged access management in line with KTB's business and security objectives, along with industry and technology standards and best practices.
- Lead and support the design and build of PAM technical capabilities, and oversee the expansion and use of the technology and processes.
- Develop business cases that drive adoption of the tools by proving the benefits.
- Prepare for the next stage of transformation for Privileged Access Management, focusing on overall risk reduction, operational efficiency, and usability through automation, data analytics, and increased monitoring capabilities.
- Lead the development, implementation, and management of relevant metrics to measure the efficiency and effectiveness of the Privileged Access Management service.
- Build capability to monitor automation performance, including benchmarking and tracking performance against service improvements.

Work Experience
- Relevant experience in a Privileged Access Management engineering role.
- Minimum of 2 full implementations.
- 7+ years of experience in implementations and configurations of IAM/PAM systems.

Education And/Or Certification Requirements
- Bachelor's degree in Computer Science or a combination of relevant education, experience, and training.

Top Five Skills Required To Perform This Role
- Experience in implementations and configurations of IAM/PAM systems and password/credential vaulting technologies.
- Required (expert level): Privileged Access Management platform suite of products, including PAS, PSM, and CPM development.
- Experience writing and managing code developed in any of the following languages: PowerShell, BeanShell, C#, or Java.
- Hands-on experience working across various cloud environments, including IaaS, PaaS, and SaaS service offerings.
- Access Management products and solutions preferred: Active Directory servers/IDM technologies.
- Overall experience in all aspects of IAM is strongly desirable (Saviynt).
- Good understanding of cloud-based platforms.

Why Kontoor Brands?
At Kontoor, we offer a comprehensive benefits package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family's needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement.

We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents.
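For illustration only: a minimal sketch of the kind of PAM automation scripting this posting describes, written against a generic REST vault API. The base URL, routes, and field names are entirely hypothetical, not Delinea's actual API; real integrations should follow the vendor's documentation.

```python
# Illustrative sketch: checking out a vaulted credential from a generic
# (hypothetical) PAM REST API. All endpoints and payload fields are placeholders.
import requests

BASE = "https://pam.example.com/api"  # hypothetical PAM endpoint
session = requests.Session()

def authenticate(user: str, password: str) -> None:
    # Exchange service-account credentials for a bearer token (hypothetical route).
    resp = session.post(f"{BASE}/auth/token",
                        json={"user": user, "password": password})
    resp.raise_for_status()
    session.headers["Authorization"] = f"Bearer {resp.json()['token']}"

def checkout_secret(secret_id: int) -> str:
    # Retrieve a vaulted credential by id (hypothetical route and payload).
    resp = session.get(f"{BASE}/secrets/{secret_id}")
    resp.raise_for_status()
    return resp.json()["password"]

if __name__ == "__main__":
    authenticate("svc-rotation", "***")       # placeholder credentials
    print(checkout_secret(42) is not None)    # never log the secret itself
```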
Posted 4 days ago
8.0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

We are inviting applications for the role of Lead Consultant – Data Engineer! This role supports business enablement, which includes understanding business trends and providing data-driven solutions at scale. The hire will be responsible for developing, expanding, and optimizing our data pipeline architecture, as well as optimizing data flow and collaboration across cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up, either on-prem or in the cloud (AWS/Azure). The data engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture.

Core data engineering work experience in Life Sciences/Healthcare/CPG for a minimum of 8+ years.
Work location: Bangalore

Responsibilities
- Professional experience in creating and maintaining optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Experience working on warehousing systems, and an ability to contribute toward implementing end-to-end, loosely coupled/decoupled technology solutions for data ingestion and processing, data storage, data access, and integration with business-user-centric analytics/business intelligence frameworks.
- Advanced working SQL knowledge and experience working with relational databases and query authoring, as well as working familiarity with a variety of databases.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Design, develop, and maintain scalable and resilient ETL/ELT pipelines for handling large volumes of complex data.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure big data toolsets.
- Architect and implement data governance and security for data platforms on the cloud.
- Cloud certification is an advantage but not a mandate for this role.

Experience with the following software/tools (see the workflow sketch after this posting):
- Relational SQL and NoSQL databases, including Postgres and MongoDB.
- Big data tools: Hadoop, Spark, Kafka, etc.
- Data pipeline and workflow management tools: Airflow, Luigi, etc.
- AWS or Azure cloud services.
- Scripting languages: Python or Java.
- Understanding of stream-processing systems: Spark Streaming, etc.

Additional expectations:
- Strong project management and organizational skills.
- Ability to comprehend business needs, convert them into a BRD and TRD (business/technical requirement documents), develop an implementation roadmap, and execute on time.
- Effectively respond to requests for ad hoc analyses.
- Good verbal and written communication skills.
- Ownership of assigned tasks without supervisory follow-up.
- Proactive planner who can work independently to manage own responsibilities.
- Personal drive and positive work ethic to deliver results within tight deadlines and in demanding situations.

Qualifications
Minimum qualifications:
- Master’s or Bachelor’s degree in Engineering (BE/B.Tech), BCA, MCA, BSc/MSc, or a related field.

Why join Genpact?
- Lead AI-first transformation – Build and scale AI solutions that redefine industries.
- Make an impact – Drive change for global enterprises and solve business challenges that matter.
- Accelerate your career – Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws.

Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 30, 2025, 1:44:51 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
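For illustration only (not Genpact code): a minimal Airflow DAG of the kind the workflow-management item above names. The DAG id and task bodies are hypothetical placeholders; the operator imports are standard Airflow 2.x.

```python
# Illustrative sketch: a three-step extract/transform/load DAG in Apache Airflow.
# DAG id and task logic are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the landing zone")   # placeholder step

def transform():
    print("clean and conform the data set")         # placeholder step

def load():
    print("load curated tables to the warehouse")   # placeholder step

with DAG(
    dag_id="daily_sales_pipeline",                  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                                  # linear dependency chain
```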
Posted 4 days ago
10.0 years
0 Lacs
India
Remote
Job Description
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 PM - 11:00 PM IST
BUDGET: As per company standards
REPORTING: This position will report to our CEO or any other lead as assigned by Management.

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up. You will collaborate with our software engineers, database architects, data analysts, and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy, and greater functionality in our data system.

Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java, and/or other languages, and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats, including Delta tables, Parquet, CSV, and JSON.
● Comfortable working in a Linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient, and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
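For illustration only: a small PySpark job touching the formats this posting lists (Parquet in, Delta out). Paths and column names are hypothetical, and the session is assumed to have the Delta Lake extensions configured, as on Databricks.

```python
# Illustrative sketch: curate raw Parquet into a partitioned Delta table.
# Paths and columns are placeholders; assumes Delta Lake is configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Ingest raw Parquet files (placeholder path).
orders = spark.read.parquet("/mnt/raw/orders")

# Light transformation: typed timestamp, partition key, derived total.
curated = (orders
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("total", F.col("quantity") * F.col("unit_price")))

# Write as a partitioned Delta table (placeholder path).
(curated.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .save("/mnt/curated/orders"))
```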
Posted 6 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
- B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines.
- Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks.
- Highly proficient in SQL and data model (conceptual and logical) concepts.
- Highly proficient with Python and Spark (3+ years).
- Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
- 2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure.
- Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
- Exposure to Hadoop and shell scripting is a plus.
- Minimums: 2 years overall; Databricks 1 year desirable; Python and Spark 1+ years; SQL; any cloud experience 1+ year.

Responsibilities
- Design, implement, and improve processes and automation of data infrastructure.
- Tune data pipelines for reliability and performance.
- Build tools and scripts to develop, monitor, and troubleshoot ETLs.
- Perform scalability, latency, and availability tests on a regular basis.
- Perform code reviews and QA data imported by various processes (see the sketch after this posting).
- Investigate, analyze, correct, and document reported data defects.
- Create and maintain technical specification documentation.
(ref:hirist.tech)
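For illustration only: a lightweight data-QA check of the kind the "QA data imported by various processes" duty suggests. The table and column names are hypothetical sample data.

```python
# Illustrative sketch: simple data-quality metrics for a freshly loaded data set.
import pandas as pd

def qa_report(df: pd.DataFrame, key: str) -> dict:
    """Return basic quality metrics: row count, duplicate keys, null rates."""
    return {
        "row_count": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_rate": df.isna().mean().round(4).to_dict(),
    }

if __name__ == "__main__":
    # Hypothetical sample extract with one duplicate key and one null.
    sample = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [10.0, None, 5.5, 8.0],
    })
    print(qa_report(sample, key="order_id"))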
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview
We’re seeking a proactive DevOps Engineer to streamline our infrastructure and delivery processes across Azure, on-premises, and Kubernetes (Wrangler) environments. You’ll be responsible for automating CI/CD pipelines, implementing infrastructure as code, improving security and observability, and supporting production deployments.

Responsibilities
Key Responsibilities:
- CI/CD Management: Build and maintain CI/CD pipelines for Azure, on-prem, and Kubernetes clusters. Implement gated deployments and troubleshoot pipeline failures.
- Git Strategy & Governance: Enforce PR reviews, CI checks, and branching strategies. Maintain repo hygiene and branch protection policies.
- Security & Quality: Integrate tools like Snyk and SonarQube to identify and resolve vulnerabilities and code quality issues.
- Infrastructure as Code: Provision and manage Azure infrastructure using Terraform. Maintain reusable modules and automate resource lifecycles.
- Monitoring & Cost Optimization: Use Azure Monitor, Prometheus, and Grafana to build dashboards, set up alerts, and identify cost savings.
- Access Management: Define Azure RBAC policies. Enforce least-privilege access and regularly audit sensitive environment permissions.
- Incident Response & Support: Provide deployment support during off-hours, resolve infra-related issues, and assist app teams with CI/CD workflows.
- Automation: Identify and automate manual tasks across environments. Develop custom monitoring and alerting where needed (see the exporter sketch after this posting).
- Disaster Recovery: Implement and test DR plans for critical systems to ensure recovery readiness.

Qualifications
Required Skills & Experience:
- Hands-on experience with Azure DevOps, Git, and YAML pipelines
- Proficiency in Terraform for infrastructure provisioning
- Strong understanding of Azure services, especially identity, monitoring, compute, and networking
- Familiarity with Kubernetes (preferably Rancher/Wrangler-based)
- Experience integrating security tools (Snyk, SonarQube)
- Experience with monitoring/observability tools (Grafana, Prometheus, Azure Monitor)
- Solid scripting skills (e.g., Bash, PowerShell, Python)

Nice to Have:
- Exposure to hybrid cloud or on-premises deployments
- Experience supporting production workloads and deployments in enterprise environments
- Knowledge of cost optimization best practices in Azure
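For illustration only: a tiny custom Prometheus exporter of the kind the custom-monitoring duty suggests. The probed URL and port are placeholders; it uses the standard prometheus_client and requests packages.

```python
# Illustrative sketch: expose an app health check as Prometheus metrics.
# The health endpoint and scrape port are placeholders.
import time
import requests
from prometheus_client import Gauge, start_http_server

APP_UP = Gauge("app_up", "1 if the health endpoint responds OK, else 0")
LATENCY = Gauge("app_health_latency_seconds", "Health endpoint response time")

def probe(url: str) -> None:
    try:
        t0 = time.monotonic()
        r = requests.get(url, timeout=5)
        LATENCY.set(time.monotonic() - t0)
        APP_UP.set(1 if r.ok else 0)
    except requests.RequestException:
        APP_UP.set(0)  # treat network failures as "down"

if __name__ == "__main__":
    start_http_server(9105)                        # scrape target for Prometheus
    while True:
        probe("https://app.example.com/health")    # placeholder endpoint
        time.sleep(30)
```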
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Overview
We are looking for a savvy Data Engineer to manage in-progress and upcoming data infrastructure projects. The candidate will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder (using Python) and data wrangler who enjoys optimizing data systems and building them from the ground up. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional/non-functional business requirements using Python and SQL/AWS/Snowflake.
- Identify, design, and implement internal process improvements: automating manual processes using Python, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL/AWS/Snowflake technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

Desired Skillset
- 3+ years of experience in a Python scripting and data-specific role, with a Bachelor's degree.
- Experience with data processing and cleaning libraries (e.g., pandas, NumPy), web scraping/web crawling for automation of processes, and APIs and how they work (an illustrative sketch follows this posting).
- Ability to debug code when it fails and find the solution.
- Basic knowledge of SQL Server job activity monitoring and of Snowflake.
- Experience with relational SQL and NoSQL databases, including PostgreSQL and Cassandra.
- Experience with most or all of the following cloud services: AWS, Azure, Snowflake, Google.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
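For illustration only: pulling records from a REST API and cleaning them with pandas, matching the API and pandas items above. The endpoint and field names are hypothetical.

```python
# Illustrative sketch: fetch JSON records from an API and clean them with pandas.
# URL and columns are placeholders.
import pandas as pd
import requests

def fetch_records(url: str) -> pd.DataFrame:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()                # fail fast on HTTP errors
    return pd.DataFrame(resp.json())

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()
    # Coerce bad timestamps to NaT, then drop rows that cannot be dated.
    df["created_at"] = pd.to_datetime(df["created_at"], errors="coerce")
    return df.dropna(subset=["created_at"])

if __name__ == "__main__":
    df = clean(fetch_records("https://api.example.com/v1/orders"))  # placeholder
    print(df.head())
```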
Posted 1 week ago
0 years
0 Lacs
Greater Kolkata Area
Remote
Who We Are
Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 13,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong.

Job Posting
Position is based remote, India.

Duties And Responsibilities
- Continuously monitor critical system access.
- Work with the functional, development, and technical teams to ensure requirements are understood and have all possible details captured to develop the solution for application security.
- Perform regular health checks to detect deviations from established procedures, role mapping, and unauthorized system activity, and report findings.
- Ensure that changes to roles and the system are tested, approved, and completed according to regulatory and compliance requirements.
- Support identifying risks and designing the SOD (Segregation of Duties) Matrix (see the sketch after this posting).
- Provide support for users with security-related problems and assist functional and technical teams with troubleshooting critical issues as they relate to security roles.
- Support program audit activities.
- Design and implement continuous monitoring controls.
- Work closely with the IT Security team.
- Administer solutions that facilitate user provisioning/de-provisioning, authentication/authorization, and reporting based on business needs, industry best practices, and audit/regulatory requirements, working with the functional team and business role owners.
- Identify and implement continuous improvement opportunities to drive process efficiencies, applying conceptual knowledge and technology to solve sophisticated business process and procedural problems.
- Resolve customer complaints/technical issues in collaboration with the support team and respond to suggestions for improvements and enhancements.
- Perform hands-on technical configuration of security on SAP applications when required, for example in high-risk or highly sophisticated enhancements.
- Assist in the management of technical changes through the landscape, with responsibility for quality and assurance that control points are satisfied.

Working Experience
- Experience in SAP Security projects with at least 3 full-cycle implementations, and experience in SAP GRC Access Controls configuration and support.

Education And/Or Certification Requirements
- Bachelor's degree in Computer Science or a combination of relevant education, experience, and training.

Top Five Skills Required To Perform This Role
- Hands-on SAP Security support and configuration experience.
- An understanding of SAP authorization concepts in an enterprise environment (single/composite roles and role derivation).
- SAP Security and GRC technical skills, covering the main functional areas and Basis components.
- Experience in developing, administering, and monitoring the GRC ruleset.
- Adept at analyzing SoD risks and reviewing users' IDs/roles with respect to SoD resolutions.
- Proficient in identifying and analyzing mitigating controls for SoD conflicts.
- Ability to assist in the management of technical changes through the landscape, with responsibility for quality and assurance that control points are satisfied.
- An understanding of key business process risks.
- Awareness of Information Security principles.

Why Kontoor Brands?
At Kontoor, we offer a comprehensive benefits package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family’s needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement.

We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents.
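For illustration only: a minimal Segregation-of-Duties conflict check in the spirit of the SoD Matrix duties above. The roles, users, and conflict pairs are hypothetical sample data, not SAP GRC output.

```python
# Illustrative sketch: flag users whose role assignments violate an SoD matrix.
from itertools import combinations

# Hypothetical SoD matrix: pairs of roles one user must not hold together.
SOD_CONFLICTS = {
    frozenset({"CREATE_VENDOR", "POST_PAYMENT"}),
    frozenset({"MAINTAIN_PAYROLL", "APPROVE_PAYROLL"}),
}

# Hypothetical user-to-roles extract.
assignments = {
    "jdoe": {"CREATE_VENDOR", "POST_PAYMENT", "DISPLAY_ONLY"},
    "asmith": {"MAINTAIN_PAYROLL"},
}

def sod_violations(user_roles: dict):
    """Yield (user, role_pair) for every conflicting pair a user holds."""
    for user, roles in user_roles.items():
        for pair in combinations(sorted(roles), 2):
            if frozenset(pair) in SOD_CONFLICTS:
                yield user, pair

for user, pair in sod_violations(assignments):
    print(f"SoD conflict for {user}: {pair[0]} + {pair[1]}")
```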
Posted 1 week ago
0.0 - 31.0 years
1 - 3 Lacs
Secunderabad
On-site
Now Hiring: Partner of Operations @ Mantittude

Location: East Maredpally
Type: Fixed Contract for 6 months - renewable
Experience Level: 2–5 Years
Type: Full-Time | High Ownership | Startup Culture
Industry: D2C | Men’s Skincare | E-commerce

🧴 About Mantittude
We’re not just another skincare brand. Mantittude is on a mission to redefine grooming for the modern Indian man — with powerful formulations, bold branding, and zero fluff. We're a fast-growing, performance-driven D2C startup that believes men should own their skin, their time, and their game. Now, we’re looking for someone who can own our backend operations like a boss — someone who thrives in organized chaos and doesn’t wait for SOPs to get things moving.

🎯 The Role: Operations Executive
This isn’t a desk-bound job where you just move paper. You’ll be our ops ninja, data wrangler, logistics whisperer, and vendor troubleshooter — all rolled into one. You’ll build systems, not just manage them. You’ll solve, not escalate. You’ll own, not outsource.

🔑 Key Responsibilities

📊 Data Management & Dashboard Automation
• Dive into platforms like Shopify, Seller Central, Shiprocket, and more
• Identify the metrics that matter to drive business performance
• Use tools like Google Sheets, Looker Studio, and other lean solutions to create real-time dashboards
• Build systems to track, analyze, and act — not just report

🚚 End-to-End Supply Chain Management
• Ensure timely and accurate dispatch of customer & marketplace orders
• Coordinate with 3PL partners & onboard new logistics providers
• Manage inventory across warehouses, marketplaces, and internal systems
• Handle returns and operations on Amazon, Flipkart, and D2C
• Be the go-to person for production timelines and vendor coordination

💡 Proactive Ops & Vendor Management
• Identify and onboard the right vendors for packaging, logistics, ingredients, and more
• Negotiate and optimize procurement and cost structures
• Run primary/secondary research to identify raw materials, innovations, and partnerships
• Liaise with consultants and freelancers for legal, admin, and compliance matters
• Stay scrappy, stay alert, and always keep moving

👀 What We’re Looking For
• 2–5 years of hands-on operations experience (D2C / FMCG / eComm preferred)
• Fluent in Excel/Google Sheets, and capable of building dashboards
• Someone who can hustle like a founder and execute like a pro
• A street-smart doer with strong communication and vendor management skills
• Bonus: Experience with Shopify, Amazon Seller Central, Flipkart Seller Hub, Shiprocket

🙌 What You’ll Get
• High-impact role with end-to-end ownership
• The chance to work with a bold, purpose-driven brand
• Flexible, agile, and non-corporate culture
• Opportunity to grow as fast as you want to
• Real learning. Real responsibility. Real growth.

👉 Ready to be the engine behind India’s boldest men’s skincare brand?
Posted 2 weeks ago
7.0 years
0 Lacs
India
Remote
Company: Numeric Technologies
Company Website: https://numerictech.com
Type: Permanent with Numeric Technologies

Numeric, incorporated in 1996, is a worldwide Business & Information Technology consulting and services company. We are headquartered in Chicago, IL, with additional offices in Miami, Silicon Valley, Luxembourg, and the UK, as well as delivery centers in India (Bengaluru, Hyderabad, Chennai) to serve our customers' offshore needs. We at Numeric Technologies pride ourselves on providing our customers with the best services and solutions. We believe that the right people in the right time and the right position are the key to our company’s improvement; we continue to endeavor and develop our company through our corporate culture and values.

Position: Data Engineer
Experience: 7+ Years
Location: BLR/Remote
Shift: General shift

The Data Engineer will join the team to expand and optimize our data and data pipeline architecture and optimize data flow and collection for cross-functional teams. The candidate should be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support Software Developers, Data Quality Engineers, Data Analysts, and Data Scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

Responsibilities
- Create and maintain optimal data pipeline architecture.
- Assemble complex data sets that meet functional/non-functional requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, dbt, and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into employee experience, operational efficiency, and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support associated data infrastructure needs.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Keep up to date with the latest feature sets and capabilities from public cloud providers (such as AWS and Azure) and find ways to apply them to help the team.
- Work with data scientists and analysts to strive for greater functionality in our data systems.

Minimum Qualifications
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, including:
- 5+ years of hands-on experience in Snowflake (see the loading sketch after this posting).
- 5+ years of working in dbt, with knowledge of advanced dbt concepts like macros and Jinja templating.
- Advanced working SQL experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience with scripting languages such as Python.
- Experience with big data tools such as PySpark.
- Experience with AWS cloud services often used for data engineering, including S3, EC2, Glue, Lambda, RDS, or Redshift.
- Experience working with APIs to pull and push data.
- Experience optimizing 'big data' data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Preferred Qualifications
- Experience working with AWS CloudFormation templates is a plus.
- Familiarity with Agile and SCRUM methodologies is a plus.
- Experience working with Power BI to develop dashboards is a plus.
- Analytical skills related to working with unstructured datasets.
- A successful history of extracting value from large, disconnected datasets.
- Experience working with agile, globally distributed teams.
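For illustration only: loading a staged file into Snowflake with the snowflake-connector-python package, one small slice of the Snowflake/ELT work this posting describes. The account, credentials, and object names are placeholders.

```python
# Illustrative sketch: run a COPY INTO load against Snowflake from Python.
# All connection values and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account locator
    user="ETL_SVC",              # placeholder service user
    password="***",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO ingests files already placed on a named stage (placeholder names).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    print(cur.fetchall())        # per-file load results
finally:
    conn.close()
```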
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Karnataka
On-site
Location: Karnataka, Bengaluru
Experience Range: 7 - 15 Years

Job Description: Spark/Scala
As a Software Development Engineer 2, you will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimising data systems and building them from the ground up. The Data Engineer will lead our software developers on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimising or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Responsibilities
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, coordinating the re-design of infrastructure for greater scalability, etc.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Support PROD systems.

Qualifications
- Must have about 5 - 11 years of experience, with at least 3 years of relevant experience in Big Data.
- Must have experience in building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data.
- Must have experience in Hadoop, Hive, and Spark with Scala, with good experience in performance tuning and debugging issues (a tuning sketch follows this posting).
- Good to have experience with any stream processing (Spark/Java with Kafka).
- Must have experience in the design and development of Big Data projects.
- Good knowledge of functional programming and OOP concepts, SOLID principles, and design patterns for developing scalable applications.
- Familiarity with build tools like Maven.
- Must have experience with any RDBMS and at least one NoSQL database, preferably PostgreSQL.
- Must have experience writing unit and integration tests using ScalaTest.
- Must have experience using a version control system - Git.
- Must have experience with CI/CD pipelines – Jenkins is a plus.
- Basic hands-on experience with one cloud provider (AWS/Azure) is a plus.
- Databricks Spark certification is a plus.
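For illustration only: a common Spark performance-tuning move (broadcasting a small dimension table to avoid a shuffle join), relevant to the tuning duties above. Paths and keys are placeholders, and the sketch uses PySpark for brevity even though the posting centers on Scala.

```python
# Illustrative sketch: replace a shuffle join with a broadcast hash join.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning-demo").getOrCreate()

facts = spark.read.parquet("/data/events")       # large fact table (placeholder)
dims = spark.read.parquet("/data/event_types")   # small lookup table (placeholder)

# The broadcast hint ships the small table to every executor,
# skipping the expensive shuffle of the large table.
joined = facts.join(broadcast(dims), on="event_type_id", how="left")

joined.explain()  # verify the physical plan shows BroadcastHashJoin
```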
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose
We are looking for a Senior SQL Developer to join our growing team of BI & analytics experts. The hire will be responsible for expanding and optimizing our data and data queries, as well as optimizing data flow and collection for consumption by our BI & Analytics platform. The ideal candidate is an experienced query builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The SQL Developer will support our software developers, database architects, data analysts, and data scientists on data and product initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple systems and products. The right candidate will be excited by the prospect of optimizing our company’s data architecture to support our next generation of products and data initiatives.

Job Responsibilities
Essential Functions:
- Create and maintain optimal SQL queries, views, tables, and stored procedures (see the sketch after this posting).
- Work together with various business units (BI, Product, Reporting) to develop the data warehouse platform vision, strategy, and roadmap.
- Understand the development of physical and logical data models.
- Ensure high-performance access to diverse data sources.
- Encourage the adoption of the organization’s frameworks by providing documentation, sample code, and developer support.
- Communicate progress on the adoption and effectiveness of the developed frameworks to department heads and managers.

Required Education And Experience
- Bachelor’s or Master’s degree, or an equivalent combination of education and experience in a relevant field.
- Understanding of T-SQL, data warehouses, star schema, data modeling, OLAP, SQL, and ETL.
- Experience in creating tables, views, and stored procedures.
- Understanding of several BI and reporting platforms, with awareness of industry trends and direction in BI/reporting and their applicability to the organization’s product strategies.
- Skilled in multiple database platforms, including SQL Server and MySQL.
- Knowledgeable about source control and project management tools like Azure DevOps, Git, and JIRA.
- Familiarity with using SonarQube for clean T-SQL coding practices.
- Familiarity with DevOps best practices and automation of documentation, testing, build, deployment, configuration, and monitoring.
- Communication skills: It is vital that applicants have exceptional written and spoken communication skills, with active listening abilities, to contribute to strategic decisions and advise senior management on specialized technical issues that will have an impact on the business.
- Strong team-building skills: It is crucial that they can provide direction for complex projects, mentor junior team members, and communicate the organization’s preferred technologies and frameworks across development teams.
- Experience: A candidate for this position must have at least 5+ years working as a SQL Developer in a data warehousing position within a fast-paced and complex business environment. The candidate must also have experience developing schema data models in a data warehouse environment, experience with full implementation of the system development lifecycle (SDLC), and proven, successful experience working with concepts of data integration, consolidation, enrichment, and aggregation. A suitable candidate will also have a strong demonstrated understanding of dimensional modeling and similar data warehousing techniques, as well as experience working with relational or multi-dimensional databases and business intelligence architectures.
- Analytical skills: A candidate for the position will have passion and skill in research and analytics, as well as a passion for data management tools and technologies. The candidate must have an ability to perform detailed data analysis, for example, in determining the content, structure, and quality of data through the examination of data samples and source systems. The hire will additionally have the ability to troubleshoot data warehousing issues and quickly resolve them.

Expected Competencies
- Detail oriented with strong organizational skills.
- Attention to programming style and neatness.
- Strong English communication skills, both written and verbal.
- Ability to train and mentor junior colleagues with patience and tangible results.

Work Timings
This is a full-time position. Days and hours of work are Monday through Friday, and candidates should be flexible to support different time zones, ranging between 12 PM IST and 9 PM IST; the work schedule may include evening hours or weekends due to client needs, per manager instructions. This role will work in hybrid mode and will require at least 2 days' work from the office in Hyderabad. Occasional evening and weekend work may be expected in case of job-related emergencies or client needs.

EEO Statement
Cendyn provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, Cendyn complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training. Cendyn expressly prohibits any form of workplace harassment based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. Improper interference with the ability of Cendyn’s employees to perform their job duties may result in discipline up to and including discharge.

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
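For illustration only: calling a parameterized T-SQL stored procedure from Python with pyodbc, echoing the stored-procedure work above. The server, database, procedure, and column names are hypothetical.

```python
# Illustrative sketch: execute a T-SQL stored procedure with bound parameters.
# Connection values and procedure/column names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql.example.com;DATABASE=SalesDW;"   # placeholders
    "Trusted_Connection=yes;"
)

cursor = conn.cursor()
# Parameter markers keep the call safe from SQL injection.
cursor.execute("{CALL dbo.usp_GetDailySales (?, ?)}",
               ("2025-01-01", "2025-01-31"))
for row in cursor.fetchall():
    # Column attributes assume the procedure returns sale_date and total.
    print(row.sale_date, row.total)

conn.close()
```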
Posted 2 weeks ago
15.0 years
0 Lacs
Greater Kolkata Area
Remote
Who We Are
Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 13,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong.

Job Posting
Position will be based remote, India.

Job Description

Engineering and Operations:
- System Monitoring & Performance Optimization: Ensuring smooth operations, continuous monitoring, and optimization of SAP environments.
- Incident Management & Troubleshooting: Quickly addressing and resolving technical issues across SAP systems and databases.
- Configuration & Maintenance: Overseeing the configuration of SAP Solution Manager, TMS, and SAP parameters.
- Database Management: Managing SAP HANA databases, applying patches and upgrades, ensuring backups, and optimizing performance.
- Cloud and Integration Management: Ensuring seamless integration of on-premise systems with SaaS and cloud platforms like Azure.
- SAP Vertex Integration: Supporting the configuration, maintenance, and integration of SAP Vertex for tax calculations, reporting, and compliance.
- SAP PI/PO Configuration & Management: Overseeing SAP PI/PO integrations, ensuring efficient data exchange, and troubleshooting integration issues between SAP and non-SAP systems.
- Shift Work: Flexibility to work shifts based on operational needs; requires participating in some on-call support and/or a follow-the-sun support model.

Project Execution And Delivery
- Participate in project technical design sessions, analyzing business and technical infrastructure and cloud requirements and processes, integrating solutions for new and existing infrastructure, and continuously improving standards, processes, and procedures across the functional area of responsibility related to the global SAP environment.
- Peer review, validate, and develop technical designs, prototypes, process designs, testing, training, and definitions of support procedures for systems deployed in cloud/on-prem infrastructure(s).
- Develop and maintain appropriately detailed system documentation for the global SAP environment, instances, interfaces, platforms, and relevant solutions – including architecture, design, integration flows, implementation, standard operating procedures, and testing lifecycle activities.
- As required, prepare formal project documentation, including specifications, requirements summaries, high-level design documents, low-level logical system design documents, state diagrams, automation integration flows, and test scripts.

Internal Stakeholder and Vendor Management
- Build, develop, and grow business relationships with users within Kontoor Global IT Infrastructure/Network/Application/Security and with key partners and vendors. This includes working with internal and external infrastructure, application, security, and networking teams to maintain and advance support for both current and future SAP landscapes.
- Provide regular communication to Kontoor IT management and the leadership team on internal team initiatives, global IT automation journey goals/objectives, infrastructure/automation-related projects, and other work assignments.

Typical Requirements
- Bachelor’s degree in Computer Science, Software Development/Engineering, Infrastructure Technology Automation/Engineering, Industrial Engineering, Management Information Systems, or a related field preferred, or 15+ years of an equivalent combination of education and relevant experience.
- Overall SAP experience: 5+ years of total experience in SAP Basis and related technologies, with a deep understanding of SAP landscapes and architecture. Extensive experience working on large-scale SAP projects, including SAP S/4HANA, SAP BTP Cloud, SAP CAR, SAP Fiori, SAP SLT, SAP GRC, and Solution Manager implementations and support.

SAP Technical Expertise:
- SAP S/4HANA: Hands-on experience in SAP S/4HANA.
- SAP BTP Cloud: Proficiency in SAP Business Technology Platform, including cloud integrations.
- SAP CAR (Customer Activity Repository): Experience in implementing and maintaining SAP CAR.
- SAP Fiori: Expertise in Fiori app implementation, configuration, and troubleshooting.
- SAP SLT (SAP Landscape Transformation): Knowledge of real-time data replication and integration using SAP SLT.
- SAP GRC (Governance, Risk, and Compliance): Experience in implementing and managing SAP GRC systems.
- SAP Solution Manager: Experience in configuring and maintaining SAP Solution Manager, including basic and advanced configurations.
- SAP Vertex: Experience in integrating and maintaining SAP Vertex (tax automation solution) with SAP ERP and S/4HANA environments.
- SAP PI/PO (Process Integration/Process Orchestration): Expertise in configuring, managing, and troubleshooting SAP PI/PO for seamless integration between various SAP and non-SAP systems.

System Administration And Monitoring
- Operating system and database management: Experience with both SUSE Linux and Windows operating systems. Expertise in SAP/DB backup, restore, and recovery, server monitoring, performance optimization, and troubleshooting issues from the OS level.
- SAP HANA database: Deep knowledge of SAP HANA's core technology, architecture, landscape design, installation, upgrades, and performance tuning.
- SAP ERP systems: Administration and upgrade of SAP ERP systems, with specific experience handling SAP ERP installations and MS SQL databases.
- System copy and transport management: Handling system copy tasks, managing transports, TMS configuration, and troubleshooting transport errors.

Performance Optimization & Troubleshooting:
- SQL performance analysis: Expertise in using SQL traces for performance optimization and impact analysis.
- Incident management: Troubleshooting issues, including monitoring work processes, analyzing logs, and applying solutions based on OSS notes.
- Root cause analysis (RCA): Ability to perform in-depth analysis and identify the root causes of performance issues or incidents.

SAP System Maintenance:
- Add-ons and support packages: Installation, upgrade, and troubleshooting of SAP add-ons and support packages.
- Kernel upgrades and client copies: Experience with SAP kernel upgrades and client copy/export-import tasks.
- Spool administration: Troubleshooting spool and printer-related issues.

Cloud & Integration Knowledge:
- SaaS products & integrations: Familiarity with SaaS products and integrations, particularly with SAP solutions.
- Cloud environments: Experience working with Azure and managing cloud-based systems, including integration with on-premise SAP systems.
- SAP PI/PO integration: Experience with SAP PI/PO for seamless integration of SAP systems with other enterprise applications, handling data transformations, routing, and monitoring integration scenarios.
- Backup/recovery & disaster recovery strategy: Experience in disaster recovery (DR) strategy, backup/recovery processes, and maintaining system availability.

Why Kontoor Brands?
At Kontoor, we offer a comprehensive benefits package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family’s needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement.

We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents.
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Syniverse is the world’s most connected company. Whether we’re developing the technology that enables intelligent cars to safely react to traffic changes or freeing travelers to explore by keeping their devices online wherever they go, we believe in leading the world forward. Which is why we work with some of the world’s most recognized brands. Eight of the top 10 banks. Four of the top 5 global technology companies. Over 900 communications providers. And it’s how we’re able to provide our incredible talent with an innovative culture and great benefits.

Who We're Looking For

The Sr Data Engineer is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems or building new solutions from the ground up. This role will work with developers, architects, product managers, and data analysts on data initiatives and ensure optimal data delivery with good performance and uptime metrics. Your behaviors align strongly with our values.

Some Of What You'll Do

Scope of the Role: Direct Reports: This is an individual contributor role with no direct reports.

Key Responsibilities: Create, enhance, and maintain optimal data pipeline architecture and implementations. Analyze data sets to meet functional/non-functional business requirements. Identify, design, and implement data process improvements: automating processes, optimizing data delivery, etc. Build infrastructure and tools to increase data ETL velocity. Work with data and analytics experts to implement and enhance analytic product features. Provide life-cycle support to the Operations team for existing products, services, and functionality assigned to the Data Engineering team.

Experience, Education, And Certifications: Bachelor’s degree in Computer Science, Statistics, Informatics, or a related field, or equivalent work experience. 5+ years of software development experience, including 3+ years in data engineering. Experience building and optimizing big data pipelines, architectures, and data sets. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL databases, such as PostgreSQL, MySQL, etc. Experience with stream-processing systems: Flink, KSQL, Spark Streaming, etc. (see the illustrative streaming sketch after this listing). Experience with programming languages such as Java, Scala, Python, etc. Experience with cloud data engineering and development, such as AWS.

Additional Requirements: Familiarity with Agile software design processes and methodologies. Good analytic skills for working with structured and unstructured datasets. Knowledge of message queuing, stream processing, and scalable big data stores. Ownership of and accountability for tasks/projects, with on-time, quality deliveries. Good verbal and written communication skills. Teamwork alongside independent design and development habits. Work with a sense of urgency and a positive attitude.

Why You Should Join Us

Join us as we write a new chapter, guided by world-class leadership. Come be a part of an exciting and growing organization where we offer competitive total compensation, flexible/remote work, and a leadership team committed to fostering an inclusive, collaborative, and transparent organizational culture. At Syniverse, connectedness is at the core of our business. We believe diversity, equity, and inclusion among our employees is crucial to our success as a global company as we seek to recruit, develop, and retain the most talented people who want to help us connect the world. Know someone at Syniverse?
Be sure to have them submit you as a referral prior to applying for this position.
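The stream-processing stack this role lists (Kafka plus Spark Structured Streaming) can be illustrated with a minimal, hedged sketch: it consumes a hypothetical events topic and lands it as Parquet. The broker, topic, schema, and paths are all assumptions, and the spark-sql-kafka connector package is assumed to be on the Spark classpath.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured Streaming.
# Broker, topic, schema, and sink paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("event-stream-sketch").getOrCreate()

# Hypothetical event schema.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("ts_epoch_ms", LongType()),
    StructField("status", StringType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
parsed = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/events/")               # placeholder sink
    .option("checkpointLocation", "/chk/events/")  # required for fault tolerance
    .start()
)
query.awaitTermination()
```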
Posted 3 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do

We are looking for a Tax Analyst to join our Wrangler team. The Wrangler team is an interdepartmental collaboration of the support and compliance departments. Our goal is to benefit both departments by providing analysts and product specialists with a deeper understanding of compliance and support processes. This is a great opportunity for a candidate to work in an innovative environment and grow within our business. You will report into the Lead/Manager - Tax & Compliance (Pune).

What Your Responsibilities Will Be

You will be responsible for preparing sales/use tax returns, validating new sales/use tax returns customers, and Support/Compliance case management. Preparation, review, and filing of multi-jurisdictional sales/use, business and occupation, and gross receipts tax returns for monthly, quarterly, semi-annual, and annual filings. Review and validate jurisdictional returns set up on behalf of our customers. Perform timely analysis and case management to resolve Support/Compliance customer inquiries. Communicate with customers clearly and precisely, in both written and verbal form. Work collaboratively with the team to train new team members on compliance processes. Document, organize, and maintain team training materials. You must be comfortable working in swing shifts (2 pm to 11 pm, 3 pm to 12 am, or 4 pm to 1 am).

What You’ll Need To Be Successful

A BCom/MCom/MBA or equivalent experience. 3+ years of related experience in tax or customer service. Knowledgeable in Excel (including complex functions), Word, and Outlook, with an understanding of indirect taxation (SUT - Sales & Use Tax). Proficient in basic math, including percentages. Able to use different software applications and tools.

How We’ll Take Care Of You

Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara

We’re defining the relationship between tax and tech. We’ve already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion-dollar business - and we’re not slowing down until we’ve achieved our mission: to be part of every transaction in the world. We’re bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we’ve designed, one that empowers our people to win. We’ve been different from day one. Join us, and your career will be too.

We’re An Equal Opportunity Employer

Supporting diversity and inclusion is a cornerstone of our company: we don’t want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About The Role

Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients, participating in activities from upstream and downstream technology selection to designing and building the different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance across the overall ecosystem (a minimal pipeline sketch follows this listing). The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Location: Chennai

Qualifications

B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or another engineering discipline. Minimum 5 years of professional work experience, with 1+ years of hands-on experience with Databricks. Highly proficient in SQL and data modeling (conceptual and logical) concepts. Highly proficient with Python and Spark (3+ years). Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc. 2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure. Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc. Exposure to Hadoop and shell scripting is a plus.

Responsibilities

Design, implementation, and improvement of processes and automation of data infrastructure. Tuning of data pipelines for reliability and performance. Building tools and scripts to develop, monitor, and troubleshoot ETLs. Perform scalability, latency, and availability tests on a regular basis. Perform code reviews and QA data imported by various processes. Investigate, analyze, correct, and document reported data defects. Create and maintain technical specification documentation.

Eucloid offers a high growth path along with great compensation, which is among the best in the industry. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
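As a hedged illustration of the Databricks/PySpark pipeline work this role describes, here is a minimal batch sketch that ingests CSV, cleans and de-duplicates it, and writes partitioned Delta; all paths and column names are hypothetical, and Delta Lake is assumed to be available (it is bundled on Databricks):

```python
# Minimal sketch of a Databricks-style batch pipeline: ingest CSV, clean, write Delta.
# Source/sink paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

orders = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders/")                       # placeholder source path
    .withColumn("order_date", to_date(col("order_date")))
    .dropDuplicates(["order_id"])                  # de-duplicate on the key
    .filter(col("amount").cast("double") > 0)      # drop invalid rows
)

(
    orders.write.format("delta")                   # Delta Lake sink
    .mode("overwrite")
    .partitionBy("order_date")                     # partitioning helps downstream reads
    .save("/mnt/curated/orders/")                  # placeholder curated path
)
```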
Posted 3 weeks ago
5.0 - 31.0 years
3 - 6 Lacs
Vanasthalipuram, Hyderabad
On-site
**Hiring for immediate joining for our Vanasthalipuram, Miyapur, and Gachibowli offices.** Total: 3 openings.

About VXU Global: we’re the launchpad for Indian students and young professionals chasing academic and career excellence in 30+ countries. To keep up with skyrocketing demand, we need a seasoned visa champion who can accelerate our submission desk, raise quality, and coach the next generation of officers.

Your Mission

Own end-to-end visa files for all the Tier 1 and Tier 2 countries: USA, UK, Canada, Germany, France, Australia, NZ, Ireland, and more, plus the rest of our ever-growing map. Sharpen and scale our documentation playbooks, ensuring students glide through checkpoints with confidence. Be the in-house “Yes, we can” oracle, diving into new rules, new tools, and new countries without flinching.

Key Responsibilities

Visa File Mastery: Scrutinise financials, academics, SOP/LORs, ITRs, property valuations; spot red flags before the embassy does. Policy Radar: Track 30+ visa regimes in real time; update internal checklists the moment an embassy so much as blinks. Mentor & Trainer: Create bite-size micro-trainings, shadow juniors on live files, and run weekly “Visa Pit-Stop” audits. Process Optimiser: Map SOPs, build TAT dashboards, and champion automation to slash turnaround times. Stakeholder Wrangler: Stay in lock-step with the counselling, finance, loans, and CRM teams. Student Happiness: Jump on SOS calls with anxious parents and keep our NPS stratospheric (>90 and climbing).

Must-Have

5–8 years of hands-on visa filing across multiple destinations; you’ve survived shifting rules and PDF chaos. Fluent in VFS, VAC, CAS, SEVIS, GTE, DS-160, CAQ, APS; acronyms are your comfort food. Continuous-learning mindset: devours updates, webinars, and policy briefs like popcorn. Proven experience mentoring or leading a small team; formal title optional, impact essential. Spreadsheet wizardry and familiarity with CRM/ticketing tools; bonus points for DIY automation hacks. Flexible for peak-intake crunch hours and dawn-patrol embassy slots. Attitude: default answer is “Sure, let’s figure it out.”

Nice-to-Have Boosters

Certifications: ICEF, QEAC, CCEA, or equivalent. Familiarity with education-loan workflows, GIC, blocked accounts, or student health-insurance norms. Comfort with process-mapping tools (Miro, Lucidchart, Napkin.ai) or basic RPA/AI stacks.

Why VXU? Impact at Scale: Your expertise powers 1,000+ dreams a year. Career Runway: Visa Ops Lead → Global Compliance Head → Chief Mobility Officer. Perks: Competitive pay, performance bonus, health cover, learning credits. “Every stamped visa is a dream in motion; let’s keep them rolling.”
Posted 3 weeks ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Profile – Lead Data Engineer

Does working with data on a day-to-day basis excite you? Are you interested in building robust data architecture to identify data patterns and optimise data consumption for our customers, who will forecast and predict what actions to undertake based on data? If this is what excites you, then you’ll love working in our intelligent automation team. Schneider AI Hub is leading the AI transformation of Schneider Electric by building AI-powered solutions. We are looking for a savvy Data Engineer to join our growing team of AI and machine learning experts. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software engineers, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Responsibilities

Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional/non-functional requirements. Design the right schema to support the functional requirements and consumption patterns. Design and build production data pipelines from ingestion to consumption. Create the necessary preprocessing and postprocessing for various forms of data for training/retraining and inference ingestions as required. Create data visualization and business intelligence tools for stakeholders and data scientists for necessary business/solution insights. Identify, design, and implement internal process improvements: automating manual data processes, optimizing data delivery, etc. Ensure our data is separated and secure across national boundaries through multiple data centers.

Requirements and Skills

You should have a bachelor's or master’s degree in Computer Science, Information Technology, or another quantitative field. You should have at least 8 years of experience working as a data engineer supporting large data transformation initiatives related to machine learning, with experience in building and optimizing pipelines and data sets. Strong analytic skills related to working with unstructured datasets. Experience with Azure cloud services: ADF, ADLS, HDInsight, Databricks, App Insights, etc. Experience in handling ETL using Spark. Experience with object-oriented/object-function scripting languages: Python, PySpark, etc. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this listing). You should be a good team player, committed to the success of the team and the overall project.

About Us

Schneider Electric™ creates connected technologies that reshape industries, transform cities, and enrich lives. Our 144,000 employees thrive in more than 100 countries. From the simplest of switches to complex operational systems, our technology, software, and services improve the way our customers manage and automate their operations. Great people make Schneider Electric a great company. We seek out and reward people for putting the customer first, being disruptive to the status quo, embracing different perspectives, continuously learning, and acting like owners. We want our employees to reflect the diversity of the communities in which we operate. We welcome people as they are, creating an inclusive culture where all forms of diversity are seen as a real value for the company. We’re looking for people with a passion for success, on the job and beyond.

Primary Location: IN-Karnataka-Bangalore. Schedule: Full-time. Unposting Date: Ongoing
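For the workflow-management tools the listing names (Azkaban, Luigi, Airflow), a minimal Airflow sketch follows, assuming Airflow 2.4+; the DAG id, task names, and task bodies are illustrative stubs, not Schneider's actual pipelines:

```python
# Minimal sketch of an Airflow DAG wiring preprocess -> training-set refresh.
# Assumes Airflow 2.4+ (the `schedule` parameter); task logic is a stub.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def preprocess(**_):
    print("clean and normalize raw data")  # stub for the real preprocessing step


def refresh_training_set(**_):
    print("publish curated features for retraining")  # stub for the real refresh step


with DAG(
    dag_id="ml_data_refresh_sketch",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="preprocess", python_callable=preprocess)
    t2 = PythonOperator(task_id="refresh_training_set", python_callable=refresh_training_set)
    t1 >> t2  # refresh only runs after preprocessing succeeds
```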
Posted 1 month ago
0 years
0 Lacs
Calcutta
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change, we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

We are inviting applications for the role of Principal Consultant – Data Engineer! This role supports business enablement, which includes understanding business trends and providing data-driven solutions at scale. The hire will be responsible for developing, expanding, and optimizing our data pipeline architecture, as well as optimizing data flow and collaboration across cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up, either on-prem or in the cloud (AWS/Azure). The data engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture. Core data engineering work experience in Life Sciences/Healthcare/CPG for a minimum of 8+ years. Work location: Bangalore.

Responsibilities

Professional experience in creating and maintaining optimal data pipeline architecture. Assemble large, complex data sets that meet functional/non-functional business requirements. Experience working on warehousing systems, and an ability to contribute towards implementing end-to-end, loosely coupled/decoupled technology solutions for data ingestion and processing, data storage, data access, and integration with business-user-centric analytics/business intelligence frameworks. Advanced working SQL knowledge and experience working with relational databases, query authoring, as well as working familiarity with a variety of databases. A successful history of manipulating, processing, and extracting value from large disconnected datasets. Design, develop, and maintain scalable and resilient ETL/ELT pipelines for handling large volumes of complex data (an illustrative extract-load sketch follows this listing). Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure big data toolsets. Architecting and implementing data governance and security for data platforms on cloud. Cloud certification will be an advantage but is not a mandate for this role.

Experience with the following software/tools: Experience with relational SQL and NoSQL databases, including Postgres and MongoDB. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with data pipeline and workflow management tools: Airflow, Luigi, etc. Experience with AWS cloud services or Azure cloud services. Experience with scripting languages: Python or Java. Understanding of stream-processing systems: Spark Streaming, etc. Strong project management and organizational skills. Ability to comprehend business needs, convert them into a BRD and TRD (business/technical requirement documents), develop an implementation roadmap, and execute on time. Effectively respond to requests for ad hoc analyses. Good verbal and written communication skills. Ownership of tasks assigned without supervisory follow-up. Proactive planner who can work independently to manage their own responsibilities. Personal drive and a positive work ethic to deliver results within tight deadlines and in demanding situations.

Qualifications

Minimum qualifications: Master’s or bachelor’s degree in engineering (BE/B.Tech), BCA, MCA, or a BSc/MSc in a science or related field.

Why join Genpact?

Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Principal Consultant. Primary Location: India-Kolkata. Schedule: Full-time. Education Level: Bachelor's / Graduation / Equivalent. Job Posting: Jul 1, 2025, 8:25:34 AM. Unposting Date: Ongoing. Master Skills List: Digital. Job Category: Full Time
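As a hedged sketch of the extract-load step described in the responsibilities above (SQL sources staged to AWS), the following pulls rows from a hypothetical Postgres table and writes them to S3 as CSV; every host, table, and bucket name is a placeholder:

```python
# Minimal extract-load sketch: Postgres -> CSV in memory -> S3 staging object.
# All connection details, table names, and bucket names are placeholders.
import csv
import io

import boto3
import psycopg2

conn = psycopg2.connect(
    host="db.example.internal", dbname="sales", user="etl", password="********"
)
buf = io.StringIO()
writer = csv.writer(buf)

with conn, conn.cursor() as cur:  # connection context manager commits on success
    cur.execute(
        "SELECT order_id, amount, created_at FROM orders WHERE created_at >= %s",
        ("2025-01-01",),
    )
    writer.writerow([d[0] for d in cur.description])  # header row from column names
    writer.writerows(cur.fetchall())

boto3.client("s3").put_object(
    Bucket="example-data-lake",              # placeholder bucket
    Key="staging/orders/2025-01-01.csv",     # placeholder staging key
    Body=buf.getvalue().encode("utf-8"),
)
```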
Posted 1 month ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Data Engineer, EDM IV. Location: Bangalore (Hybrid)/Pan India. Shift: General Shift. Experience: 6–8 years of relevant experience.

We are looking for an experienced Senior Data Engineer to join our Marketing Data Engineering Team. Reporting to the Manager, Data Engineering, this is a hybrid position in Bengaluru. The Data Engineer will join the team to expand and optimize our data and data pipeline architecture and to optimize data flow and collection for cross-functional teams. You should be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support Software Developers, Data Quality Engineers, Data Analysts, and Data Scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

Responsibilities

Create and maintain optimal data pipeline architecture. Assemble complex data sets that meet functional/non-functional requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, dbt, and AWS 'big data' technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into employee experience, operational efficiency, and other key business performance metrics. Work with stakeholders to assist with data-related technical issues and support associated data infrastructure needs. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Keep up to date with the latest feature sets and capabilities from public cloud providers (such as AWS and Azure) and find ways to apply them to help the team. Work with data scientists and analysts to strive for greater functionality in our data systems.

Minimum Qualifications

We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. 5+ years of hands-on experience in Snowflake (an illustrative connection sketch follows this listing). 5+ years of working in dbt, with knowledge of advanced dbt concepts like macros and Jinja templating. Advanced SQL experience working with relational databases, query authoring, and working familiarity with a variety of databases. Experience with scripting languages such as Python. Experience with big data tools such as PySpark. Experience with AWS cloud services often used for data engineering, including S3, EC2, Glue, Lambda, RDS, or Redshift. Experience working with APIs to pull and push data. Experience optimizing 'big data' pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Preferred Qualifications

Experience working with AWS CloudFormation templates is a plus. Familiarity with Agile and SCRUM methodologies is a plus. Experience developing dashboards with Power BI is a plus. Analytical skills related to working with unstructured datasets. A successful history of extracting value from large, disconnected datasets. Experience working with agile, globally distributed teams.
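To illustrate the Snowflake experience this listing asks for, here is a minimal sketch using the official snowflake-connector-python package; the account locator, credentials, warehouse, database, and table are placeholders, not the team's actual objects:

```python
# Minimal sketch: query Snowflake from Python with the official connector
# (pip install snowflake-connector-python). All identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account locator
    user="ETL_USER",
    password="********",
    warehouse="TRANSFORM_WH",   # placeholder warehouse
    database="ANALYTICS",
    schema="MARTS",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT campaign, SUM(spend) AS total_spend "
        "FROM marketing_spend GROUP BY campaign "
        "ORDER BY total_spend DESC LIMIT 10"
    )
    for campaign, total_spend in cur:  # Snowflake cursors are iterable
        print(campaign, total_spend)
finally:
    conn.close()
```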
Posted 1 month ago
6.0 - 9.0 years
0 Lacs
Karnataka
On-site
Location: Karnataka, Bengaluru. Experience Range: 6–9 Years.

Job Description

Job Overview: We are looking for a savvy Data Engineer to join our growing team of data engineers. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, and data analysts on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Responsibilities for Data Engineer

Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional/non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Spark on Azure 'big data' technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer

Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building and optimizing 'big data' data pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. A successful history of manipulating, processing, and extracting value from large disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment.

We are looking for a candidate with 5+ years of experience in a Data Engineer role, having experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Azure SQL, Cosmos DB, Couchbase. Experience with data pipeline and workflow management tools: Azure Data Factory, Synapse Pipelines. Experience with Azure cloud services: Databricks, Synapse Analytics, Azure Functions, ADLS. Experience with stream-processing systems: Storm, Spark Streaming, etc. Experience with object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.
Posted 1 month ago
5.0 years
0 Lacs
Chennai
On-site
POSITION SUMMARY

BOT VFX, a post-production services company in the entertainment industry with global clients, is looking to fill a “Sr Render Wrangler” position on a permanent basis. As a Senior Render Wrangler, you will play a crucial role in managing and monitoring render jobs, ensuring the smooth running of our render farm. This role reports to the Digital Resource Supervisor and, on a dotted-line basis, to the Central Production Manager.

POSITION RESPONSIBILITY

Monitor and manage render queues, prioritizing tasks to meet project deadlines (see the illustrative queue sketch after this listing). Troubleshoot and resolve rendering issues, optimizing render efficiency. Collaborate with the VFX team to ensure seamless integration of rendered elements. Maintain clear communication with supervisors and team members regarding render status and any potential issues. Assist in the development and refinement of render workflows and pipeline improvements. Provide technical support and guidance to other team members as needed.

REQUIRED SKILLS

5+ years of experience in a render wrangler or similar role within a VFX or animation studio. Strong problem-solving skills and logical reasoning abilities. Proficiency in one or more rendering packages and an understanding of VFX workflows. Excellent communication skills, both written and verbal. Ability to work efficiently under tight deadlines and manage multiple tasks simultaneously. A team player with a positive attitude and a strong work ethic.

ABOUT US

BOT VFX is a renowned visual effects services company serving clients globally. With nearly 800 team members, operations in Chennai, Coimbatore, Pune, and Atlanta, and over a dozen years of operating experience, the privately held company has enchanted clients with its mix of creative chops, scale, and distinctive, quirky culture. It’s also the winner of four FICCI BAF awards and has a wide list of fans from Los Angeles to London and Montreal to Wellington.
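Render-queue prioritization, the first responsibility above, can be sketched generically; this is an illustration of the idea (a min-heap dispatching deadline-critical shots first), not BOT VFX's actual farm tooling, and all shot names are invented:

```python
# Illustrative sketch: prioritize render jobs with a min-heap so that
# deadline-critical shots dispatch first. Shot names are hypothetical.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class RenderJob:
    priority: int                      # lower number = dispatch sooner
    shot: str = field(compare=False)   # excluded from heap ordering
    frames: int = field(compare=False)


queue: list[RenderJob] = []
heapq.heappush(queue, RenderJob(2, "seq10_shot040", 240))
heapq.heappush(queue, RenderJob(1, "seq03_shot110", 96))   # deadline-critical
heapq.heappush(queue, RenderJob(3, "seq22_shot007", 480))

while queue:
    job = heapq.heappop(queue)  # always pops the lowest-priority number first
    print(f"dispatching {job.shot} ({job.frames} frames, priority {job.priority})")
```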
Posted 1 month ago
0 years
0 Lacs
Ernakulam, Kerala, India
On-site
Company Description

Pinnacle Jeep is a premier Jeep destination store located in Kochi, Kerala. We specialize in a wide range of Jeep models including the Compass, Meridian, Rubicon, Wrangler Unlimited, and Grand Cherokee. Our dealership is renowned for offering top-notch customer service and an excellent selection of Jeep vehicles. Join our team to be part of a dynamic environment focused on automotive excellence.

Role Description

This is a full-time on-site role for a Service Technician located in Ernakulam. The Service Technician will be responsible for performing maintenance and repair on Jeep vehicles, diagnosing and troubleshooting issues, providing field service, and ensuring top-notch customer service. Daily tasks include conducting routine inspections, performing repairs, and interacting with customers to address their vehicle concerns.

Qualifications

Maintenance and repair skills. Strong troubleshooting abilities. Experience in field service. Excellent customer service skills. Ability to perform routine inspections and repairs. Strong communication and interpersonal skills. Experience with Jeep vehicles is a plus. Certification in automotive service technology or a related field is preferred.
Posted 1 month ago