
adamsre

Members
  • Content Count

    13
  • Joined

  • Last visited

  • Feedback

    N/A

Community Reputation

2 Gathering Thatch

About adamsre

  • Rank
    Naked


  1. I also wanted to point out that, due to budget, they likely do what is known as "oversubscribing". You take the 2 physical CPUs that a physical server has, with their 10-20 cores per CPU (on the newer Xeon processors), and virtually carve them up into slices of, say, 4 cores each. If there are only 20 physical cores available, and you have eight server maps running on this box, then basic math tells you that you need 32 cores to give every server its full allocation. Oversubscribing plays off the assumption that all 8 of these servers are not at full load all of the time, so they can borrow CPU cycles from the other servers that aren't currently utilizing them. When players DO log in and start to play, the available CPU cycles diminish, and ultimately we run into the issue of a virtual server waiting on a CPU cycle that another server is currently using. This is another reason for lag in the game - oversubscribing these physical servers with more maps than they should carry. Available budget would allow them to truly allocate and dedicate system resources to the underlying servers, so that they aren't "borrowing" resources from one another. By the way, system RAM behaves the same way in a virtual server environment. The caveat to NOT oversubscribing is that if a server has no players, or minimal players, you are ultimately wasting resources - this is why they look to remove low-pop servers first. It's an easy way for them to recover physical server resources.
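To make the math above concrete, here's a tiny sketch of the oversubscription ratio - allocated virtual cores divided by physical cores. The function name and the specific core counts are just my illustration, not anything WC has published:

```python
# Hypothetical sketch of the oversubscription math described above.
# All numbers (cores per CPU, vCPUs per guest) are illustrative assumptions.
def oversubscription_ratio(physical_cpus, cores_per_cpu, guests, vcpus_per_guest):
    physical_cores = physical_cpus * cores_per_cpu     # what the box really has
    allocated_vcpus = guests * vcpus_per_guest         # what the guests were promised
    return allocated_vcpus / physical_cores

# 2 CPUs x 10 cores = 20 physical cores; 8 maps x 4 vCPUs = 32 promised cores
print(oversubscription_ratio(2, 10, 8, 4))  # 1.6
```

Anything over 1.0 means the guests can contend for cycles once enough of them get busy at the same time - which is exactly the lag scenario described above.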
  2. Good call, Barium, and I think you have a perfect example there. Something to think about, architecture-wise, with these servers is that the save occurs on the same physical box (or virtual box). In the past, when I have designed server farms where disk i/o plays a crucial factor in performance (which, in this game, it does), the best course of action is to configure the server to save off to a separate spindle, or set of disks. When I mentioned tiered storage in my previous "wall of text", this is what I was referring to. You have a set of disks designated for ultra-high speed / i/o (perhaps enterprise-class SSD or M.2 drives) - this is the set that a database, or the game files, would run on. You have another set of slightly slower disks (perhaps 10,000 RPM spindle drives, which are cheaper than enterprise SSDs) where the game saves occur. This accomplishes two things right off the bat: you remove the disk i/o bottleneck that causes the server to lag (it's waiting on the disks to finish writing the save file), and you separate the save files from the game files, meaning you increase the likelihood that your save files are valid, along with having a faster means of recovery should the primary game disks fail. So, for each physical server, they likely have 6-8 virtual servers residing on it. If each physical server has SCSI-attached / Fiber-attached / or even SAN-attached storage (through iSCSI or other means), you could designate a LUN (a specific SET of disks combined into an array) and earmark each LUN to a specific virtual server, or a smaller set of virtual servers. If you have ever been on Discord with friends while they are playing on a different server, you may have noticed that you both tend to lag at, or around, the same time. This leads me to believe that they have one very large SAN that ALL of the servers in that group attach to.
If ALL of the servers in that group are saving at the identical time, the system gets overburdened and causes that nasty 15-30 (or more...) second freeze. If they had the budget, some very simple architectural changes would give us near-instantaneous relief from this lag every 15 minutes when a save occurs. Of course, optimization in other areas, and additional server CPU cores and memory, does indeed increase performance. My personal home server is an older-class DELL R710 with only two physical CPUs, 6 cores per CPU, and 128GB of RAM. I have 6 physical disks in this box - 4 in a RAID-5 array, and the remaining 2 in a RAID-1 array. I can comfortably run 4 maps simultaneously on this box - and have load tested it with about 20 players running on each map. Of course, we don't have 500 bases and a myriad of dinos sitting on display as some of these larger tribes do, but we have spawned in hundreds of dinos on each map to test my theory. I also have my home Plex server and a Cisco VIRL server (which is pretty resource intensive) running off the same physical hardware. In short, this allows me to earmark (dedicate) 16GB of RAM and 4 CPU cores per map. Most of the servers that you can rent only have 8GB of RAM, and some allow you to upgrade to 12GB. I can assure you that more memory, better disk allocation and some eventual optimization of code will most definitely increase the playability of the maps. I do agree that more thought needs to go into how many dinos a tribe has out on display, as it most definitely does impact performance. With the kibble re-work, there is little need to have so many dinos out on display at a given time. Even if you have a bunch of boss rexes out breeding, you can put them up when you're done. However, this brings up another issue - in the case of breeding something like a boss rex, it takes a long time to wake those dumb things up with the cryo cooldown.
If they limit the number of tames that you have out, they need to re-think the cryo cooldown, or do away with it altogether on at least PVE servers. I understand why they introduced it on PVP. Thoughts?
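The "everyone saves at once" problem above has a cheap mitigation that doesn't even require new hardware: stagger each server's save timer across the interval so the shared SAN never absorbs every write at once. This is a minimal sketch of that idea under my own assumptions (a 15-minute save interval, evenly spread offsets) - not a description of how the actual server scheduler works:

```python
# Rough sketch (my assumption, not WC's actual scheduler): give each server
# sharing a SAN a different save offset so the writes don't all land at once.
def save_offsets(num_servers, save_interval_secs=900):
    """Spread save start times evenly across the (assumed) 15-minute interval."""
    step = save_interval_secs / num_servers
    return [int(i * step) for i in range(num_servers)]

# 8 virtual servers on one box / SAN group:
print(save_offsets(8))  # [0, 112, 225, 337, 450, 562, 675, 787]
```

With offsets like these, the SAN sees one save roughly every 112 seconds instead of eight simultaneous ones every 15 minutes, so each save only has to contend with normal game i/o.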
  3. I like it - but don't forget that you have to first farm whatever it is that you are planning on feeding that gacha - which takes your time and resources - or the gacha will probably die.
  4. LOL... close... servers, network, security, bandwidth, support, maintenance, facilities, AND PEOPLE are expensive
  5. So, I've seen some excellent pros and cons to both sides of the discussion. If I may, let me interject my opinion and past experience as a 30-year Network / Server Infrastructure Architect. If they host their own servers, there is the initial expense of the physical server hardware, which in most cases exceeds $15,000-$25,000 per box (for a decent one). There are subscription fees that they pay for licensing the operating systems - even if it is something like CentOS / RedHat / BSD. Check their service models out - if you want support (which they do), it is generally licensed by the physical CPU, or by the number of cores that the CPU has. When you get into licensing Windows boxes, it's even more. Depending on the back-end database for this game, you have not only the physical hardware licensing requirements (per core / CPU), but a per-seat license as well. This is a license required for every other server that will be accessing that particular database server. Some vendors even license per-database, and as you can imagine, there are a ton of databases, as there are a ton of servers. I can tell you from experience that a game of this magnitude requires a strong, high-performing database with exceptionally high availability and available i/o, which is also, as you might imagine, extremely expensive. Network and security infrastructure, such as Palo Alto, Citrix, Cisco, etc., are all once again a licensed and maintenance expense model. A single Cisco switch - or even a single virtual switch, such as a mid-line Nexus - that is capable of delivering data at the throughput that I suspect this game requires, and of performing near real-time / high-speed mathematical functions and database queries, can cost into the hundreds of thousands of dollars - a single switch, mind you. This is all without factoring in the ongoing maintenance costs.
The physical infrastructure - meaning the physical cable plant within their facilities / datacenter - will need initial install, maintenance, repair, and eventual upgrade over time. Think also about what it takes to keep an environment like that cool, and to provide adequate power, fire protection and suppression, and physical security (biometrics, man traps, cameras, etc...). It's a massive endeavor, to say the very least. Backups - they do have to roll us back from time to time... that takes money... The software alone to archive off our backups is expensive, and again, typically a subscription model. This doesn't even scratch the surface on the expense associated with tiered, long-term storage arrays - look up EqualLogic, for example, get a base price for a home model, and then multiply that by many orders of magnitude more than what an average person needs to back up their personal docs / pictures / home movies. With that initial part out of the way, the most often overlooked portion of all of this is Life Cycle Management. This means that you need to plan for replacing all of the above physical hardware when it outlives its usefulness or performance levels. Even if the equipment is leased, this is still an exceptionally painful process (nobody ever wants their server to be down, and coordinating all of that with engineers, devs, and the leasing company is a general nightmare), and an expensive one, given all of the people it takes to replace the equipment. With the physical side of things out of the way, let's discuss the manpower requirements for such an operation. Server admins, database admins, monitoring services (subscriptions), security monitoring, ongoing bandwidth expenses, physical security personnel, general maintenance personnel - then insurance, taxes, and the list goes on...
Let's turn our focus to what most think about - the development / bug-fix side of things - knowing that I barely scratched the surface on the network / security / physical infrastructure side. Most dev shops will modularize their code in such a manner that it can be worked on independently, in different teams. You might have an authentication team, a network optimization team, a security team - or they may be broken down by their specific languages, such as C++, C#, PHP, or Java - or by scripting languages used for performing various automated tasks within the environment, not just within the game. Then there are graphics artists, OpenGL coders, DirectX coders, UE engine coders, etc. The point is that it takes a lot of fingers in the pie to make this game come together - and the more fingers in the pie, the more money involved. I won't even get into the associated leadership or administrative expenses. The bottom line is that all of this takes money - on a monthly basis - to pull something like this off. A single-copy sale model generates revenue once, and once only. It cannot begin to cover the ongoing costs of making a game like this available to the general public - particularly so if folks buy the initial copy at a discounted rate (which someone already pointed out). Take a page out of WoW's playbook - offer paid name changes (PLEASE lol), temporary XP or breeding boosts, skins, swag (t-shirts, mugs, hats), stuffed animals, unique mounts - something to generate additional revenue sources with which to keep all of the hardware running, current, updated and patched, as well as keep the devs and other admins happy and proud to work at WC.
Most folks that are not in the development, network, security, storage, archival, digital art, or gaming industry fields have no true realization of what is required to make our characters walk, run, and swing a sword, and definitely no idea of the sheer volume of math and computation going on in the background when that same sword strikes another player or NPC. To make our characters come to life, the long and short of it is that it takes money, and lots of it. I am all for a reasonable subscription model - somewhere in the neighborhood of $9.99 to $14.99 USD. All of this is predicated upon them using the funds to truly provide the server resources required for optimal performance, as well as expanding upon their development, debugging, code review, unit testing, and beta testing (I bet a LOT of folks would be more than willing to be FREE testers of code BEFORE they release and folks get ARKed), along with optimal network / security infrastructure and its associated expenses. In short, if they were to actually use the subscription fees for their intended purpose, this game could potentially be one of the best out there. As previously mentioned by another poster, I think that this model should really only be intended for those that wish to play on an official server. If you host your own, or play single player, the initial licensed copy of the game should suffice as your on-ramp to the game.
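The one-time-sale-vs.-subscription argument above is really just arithmetic, so here's a back-of-envelope sketch of it. Every number here is a hypothetical assumption I'm making for illustration - the real per-player infrastructure cost is WC's to know:

```python
# Back-of-envelope illustration of the one-time sale vs. subscription point.
# The $10 discounted copy and $2/month per-player cost are made-up assumptions.
def months_covered(copy_price, monthly_cost_per_player):
    """How many months one copy sale covers that player's share of ongoing costs."""
    return copy_price / monthly_cost_per_player

print(months_covered(10, 2))  # 5.0
```

Under those assumed numbers, a discounted copy pays for about five months of that player's share of the hosting bill, after which the player is pure ongoing cost - whereas a monthly subscription covers the cost every month, indefinitely.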
  6. I've gotten nothing out of fishing yet - other than 2, or 3 trout. I've tried a primitive rod, mastercraft rod, blood, sap and honey. They swim right up to me and the lure and turn their noses up at it. Is there a method to casting that I'm missing or something?
  7. Is there a reason that this very polite request was buried and inaccessible through your search engine and the normal forum threads?
  8. So, I do find it humorous that there is so much animosity towards the OP on wiping official - particularly concerning folks losing countless hours of time taming, building, breeding, and advancing their characters - yet it's perfectly acceptable to wipe legacy servers? The vast majority of those legacy players have been playing, building, breeding, etc., far longer than most of the official players - by a long shot. Personally, I have been playing since early 2016, and have endured several biome changes which destroyed several of my bases, the introduction of the un-tuned Giga (which subsequently destroyed about 200 dinos that I had at the time), griefers on the server, etc... Regardless, I have put in at least as much time as any of you official players (check my Steam profile), but it's ok to call for a wipe of legacy? Put yourself in our shoes - there's a reason we stay on legacy. Yes, the communities are generally more hospitable than official, but the primary reason is that we are well-established, and we no more want to start over than any of you saying that it would kill the game for you. Your time is more valuable than anyone's on legacy? We all paid for the game - we all expect to be able to play the game in the manner that it was designed. What's good for the goose is good for the gander - call for a wipe on legacy (and I know this thread was not started with that intent towards legacy, but I just had to comment, given some of your replies about how it would be suicidal for WC to do such), and I'll call for a wipe on all servers - across the board. If memory serves, legacy was originally separated due to duping, over-mutated dinos, and, in general, cheaters. Take a look at your official servers now, and tell me where the bulk of that has gone. Go ahead and flame me - I know it's coming - but you and I both know the truth about the time and effort invested on both sides of the fence, and where the true issue is now.
Log on any official server and look at a Giga, Boss Rex, Mana, Argent, etc... and compare it to a legacy variant of the same.
  9. Start providing backups of live servers again. I know the devs have a lot on their plates already, but for the sake of those on the servers that they are hoping to decommission, as well as those on officials that might want to spin up their own VMs, please revisit the practice of providing periodic backups of servers that are still active. The last available backups are from July 2019. For those that aren't aware of what I am referring to, here's the link where they used to provide them: https://survivetheark.com/index.php?/server-backups/ WC used to provide near-nightly backups - even weekly would be wonderful. Thanks!
  10. Cool, maybe I was just getting good rolls before. The last number of breedings that I've been doing are 1-3 females max - Tusos, Basilosaurs, Thylas, Megalos, Megatheriums, Featherlights, etc. - so a good range of dino sizes and normal maturation times. I've been rolling nothing but seemingly super-high breeding intervals - and so have my tribemates. We must have been getting super lucky before, haha. Thanks!
  11. Long Breeding Interval Hey all, Has anyone experienced an unusually long (in comparison with prior to the recent event - not during) breeding interval? 1 day, 20+ hours, to over 2 day breeding intervals? I know we all got spoiled with the event, but I don't recall having intervals this long prior to the event and following the recent patch. Thanks, Matrices
  12. I 100th this suggestion - along with fixing the flawed design where an advanced tek item, such as a tek trough, has a smaller radius than an entry-level trough. These are supposedly "high tech" items in the game; they should perform as such. While you're at it, fix the starvation bug where multiple dinos that are well within range of troughs, and carrying full inventories of food, still starve. The TLDR version of this is: FIX the game before you add more broken stuff.